IGF 2024 - Day 3 - Workshop Room 6 - DC-CRIDE & DC-IoT Age aware IoT - Better IoT

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> MODERATOR: Good morning.  We are about to start in 1 minute.  I don't hear anything.

>> Channel 5.

>> MODERATOR: Okay I hear myself now.  Jonathan, can you do a quick voice check?  Sonia, can you do a voice check?  You cannot unmute.  Jonathan, you should be able to -- can you -- 

>> JONATHAN CAVE: I can't turn on my camera but at least I can speak.

>> MODERATOR: Sonia, we will check you too. Jonathan, you can now turn on your camera.

Shall we begin?  Can we begin?  Okay. 

 

Good morning, everybody.  Welcome to this session from DC-IoT and DC-CRIDE, which is focused on age-aware IoT and better IoT.  Can you hear me well in the room?  Good.  So this session will take us through the landscape of evolving technology, how it relates to people of different ages, and how we can ensure technology serves people, with a specific focus on age.

 

So there are so many opportunities to make everyday life more convenient, safer and more efficient.  But there are also threats that come with that.  And we want to get to the heart of it.  This is why the Dynamic Coalitions throughout the year explore how to develop good practice in the best possible way and address risks such as unwanted data processing, provision of information that is inappropriate or even harmful to the individual, or processes that are based on false assumptions.

 

And one of the ways to counter those risks is by categorizing the users.  If the devices in this room can categorize users, the Internet of Things can even adapt according to their needs and take specific measures to serve that user.

 

So this is why Jutta and I discussed bringing the two Dynamic Coalitions together.  A little background on the DC-CRIDE.  Can you share that, Jutta?

>> JUTTA: Yes, of course I can share that.  And you gave me the perfect segue to that when you mentioned evolving technologies.  Because with the Dynamic Coalition on Children's Rights, we are talking about the UN Convention on the Rights of the Child.  And I do think both Dynamic Coalitions started their work very early in the Internet Governance Forum process; the one for children's rights was then called the Dynamic Coalition on Child Online Safety.  And I think IoT started the same year as well?

>> MODERATOR: 2008.

>> JUTTA: So long-standing in a certain way.  And years later, when General Comment No. 25 came out, which is dedicated to children's rights in the digital environment, we renamed it to Children's Rights in the Digital Environment, or CRIDE, like Maarten has already said.  And we found similarities in the work that we are doing, and also in the objectives that we want to achieve.

 

Because we know that children are the early adopters of new and emerging technologies.  And that's where we have to look at whether their rights are ensured and where they can benefit from these technologies.  And IoT is one area that is very important, that can help people, that can help children to benefit from their environment.  Having said that, I hand it over to you, Maarten, again.

>> MODERATOR: Thank you very much.  So basically the DC-IoT has been discussing over time how IoT can serve people, and what global good practice guidelines should be adopted by people around the world.  Because this technology is found around the world and it's used everywhere.  So like the internet, the Internet of Things doesn't stop at the border.

 

Very practically, because products come from all over the world.  But also because, for instance, the more mobile IoT devices -- in cars, in planes, or what you carry with you when you travel -- cross borders all the time as well.

 

So an understanding of global good practice would also help governments, when they develop legislation, to be more aware of what the consequences could be and what to think of.

 

Global business could also take it into account in the design and development of devices and systems.  And by doing that from the outset, innovation can be guided by these insights, even when there is no law yet.  So good practice aims at developing IoT systems and services while taking ethical considerations into account from the outset -- in the development phase, the deployment phase and the use phase of the life cycle -- and finding an ethical way forward.

 

It's about using IoT to create the environment, with the economic and technological footprint, that we want for ourselves and future generations.  And when we talk about the internet we want, with the IoT we want as part of what is developed, it's crucial that we really get clear what that means for us, and that we take action to make something happen there.

 

Because otherwise, still remembering what the chair of the high-level panel said last year in Kyoto: if we don't make the internet we want, we may get the internet we deserve, and we may not like that.

 

So with that, I really look forward to the discussions today, for which we have a number of excellent speakers in the room and online.  We will talk first about the data governance aspects that underlie this.  Then we go into labeling and certification of IoT devices, as that helps with transparency about these devices and what they can do, and empowers users to be more informed in their choices.

 

Every session so far I think I heard the word AI, so let me mention it here.  Of course it has a fundamental impact on how IoT works and how selections can be made to adapt it to the abilities of people.

 

And then last but not least, in all of this, the kind of horizontal layer: how do we develop capacity?  Because IoT technology is developed all over the world, but to apply it locally you need to have local knowledge.

 

So where does that come together?  How can we work on that?  Very happy to also have Sabrina here to talk more about that.  So with that I would love to give the floor to Jonathan Cave, who used to be a senior teaching fellow in economics at the University of Warwick and the Turing Institute, and is a former member of the Regulatory Policy Committee.

 

Jonathan, can you dive into the data governance aspect and what IoT has to do with it?

>> JONATHAN CAVE: Thank you, Maarten, and thank you everyone for showing up.  This is a very important topic.  And I'm going to largely limit these first remarks to matters dealing with data.  But one thing I want to point out is that this idea of evolving capacity applies not only to the technologies, which are changing and collecting more and more data, but also to the evolving capacities of the individuals involved -- in particular --

>> MODERATOR: Jonathan, can you improve your microphone?

>> JONATHAN CAVE: Not really.

>> MODERATOR: You are understandable but not great.  If you don't have an easy trick, let's continue.  Sorry.

>> JONATHAN CAVE: Let me just try --

>> JUTTA: Closer to the device.

>> MODERATOR: Closer to the device may help.

>> JUTTA: Now that you are close to the device it's better.

>> JONATHAN CAVE: I know -- actually, the device is attached to my ear.  So there's no way of going closer without changing my face geometry.  But I've switched to another microphone on my camera.

Is that better?

>> Yes, it's much better.

>> JONATHAN CAVE: Okay.  Thank you.  One can never tell with these technologies.  One of the things I think is very interesting is that much of our law and prescriptions around child safety is predicated on the idea of chronological age.  That people below a certain age should be kept safe, and people above that age lose that protection.

 

But of course, particularly as children have more and more experience of online environments and the people making the rules have less and less experience of the new technologies, that static perspective of protecting people on the basis of age may not be the most appropriate, and we need to stay aware of that.

 

But most of all I think it's interesting to remember the data governance issues.  One element of this is the data themselves.  They can be a source of either safety or of risk and harm to young people.

 

And the reason we care about that is both the immediate harm and the collective, progressive, ongoing harm that early exposure to inappropriate content, which includes manipulation of individuals by priming and profiling, can induce in people, which then changes the way they think as individuals or as groups.

 

Now in that respect, the question becomes: which data should people have available to them?  One particular element of this is that we have a lot of privacy laws.  And many of these privacy laws set age limits for people's exposure to, or ability to consent to, certain kinds of data collection or processing.

 

Mostly these are predicated on what we could consider sensitive data.  But in the online environment, particularly social environments or gaming environments, many more data are collected, whose implications we only dimly understand.  And this is where AI comes in.  It's not obvious which data may be harmful and which data are not.

 

So instead of imposing rules and asking industry simply to comply with those rules, increasingly in areas like online harms we are moving in the direction of a sort of duty of care, where we make businesses and providers responsible for measurable improvements and not for following a static code.  So it's harm reduction rather than compliance.

 

So there's a question about which data are collected.  There's also a more minor issue, which is that young people are exempt from certain kinds of data collection.  But those may be the same data needed to assess either what their true chronological age is or their level of digital maturity.  It may be that some of the rules we have in place make it difficult to keep pace with the evolving technologies.

 

Okay.  I think probably rather than going on I should turn over to Sonia at this point.  And I will comment when things come back up.

>> SONIA LIVINGSTONE: That's fantastic.

>> MODERATOR: Thank you.  Yes, Sonia, please go on.

>> SONIA LIVINGSTONE: Thank you for that and for the preceding remarks that set the scene.  I want to begin by acknowledging what an interesting conversation this promises to be, because we are bringing together two constituencies -- those concerned with the Internet of Things and those concerned with children's rights -- that haven't historically talked together so much.  And I think it's valuable we are having this conversation now.  In the Venn diagram of child rights and IoT, I think the overlap is currently relatively modest, and I hope we can widen the domain of mutual understanding.

 

And I think age assurance is a brilliant topic for illustrating both the overlaps and the differences in perspective on this problem.

 

So I guess from a child rights perspective, a starting point is to say that it's very hard to respect children's rights online or in relation to IoT if the digital providers don't know which user is a child.  So having that knowledge seems a prerequisite to respecting children's rights.

 

And yet, as some of us have been investigating, it is far from clear that age assurance technologies as they currently exist do themselves respect children's rights.  So the fear on the side of many is that we might bring in a non-child-rights-respecting technology to solve a serious child rights problem.

 

And I think this challenge is amplified in relation to the Internet of Things, because now we are talking about technologies that are embedded, that are ambient, and that users may not even know are operating or collecting and processing their data, enabling opportunities but also introducing risks.

 

So a child rights landscape always seeks to be holistic.  So privacy is key, as has already been said.  Safety is key, as has already been said.  But the concern about some of the age assurance solutions is that, as Jonathan just said, they introduce age limits.  So there are also potential costs to children's rights as well as benefits.

 

And it's that kind of holistic perspective that is crucial.  So it's always important we think about privacy and safety -- hygiene factors, if you like.  How do we stop the technologies introducing problems?  But we need to think about those also in relation to the rest of children's rights.

 

What am I thinking of?  I'm thinking of consulting children in the design of both the technologies and also the policies that regulate them.

 

I'm thinking of children's right of access to information, their right to participate and to be a part of the digital world rather than excluded through age limits, or perhaps through delegating the responsibility for children to parents, which means parents might exclude children.  We are seeing a lot of this in various parts of the world at the moment.

 

As Jutta said, I'm thinking about evolving capacities.  This is not just a matter of age limits, where children are either excluded or placed at risk, as it were.  That is a terrible binary, if that's where we are heading.  But we are also thinking about age-appropriate provision for children who are users or may be impacted by technology.

 

They may not be the named user.  They may not be signed up to the profile or signed up to the service or paying for the service but they may be in the room, in the car, in the street, in the workplace, in the school.  You know they may be impacted by the technology.

 

And I'm thinking also about best interests.  That overall the balance should always be in the child's best interests.  That's what every state in the world, except America, has signed up to when it ratified the Convention.  And I'm thinking of child-friendly remedy, so that when something goes wrong, children and adults have access to remedy.

 

So I think a child rights approach brings a broader perspective, but also one that is already embedded in and encoded in established laws, policies and obligations on institutions and states to ensure that these new areas of business respect children's rights during and as part of their innovation process.  So I will end with the mention of child rights by design, if you like, to give a broader focus to questions of privacy by design and safety by design.  Thank you.

 

[ SILENCE ]

>> MODERATOR: I see nothing in the chat.  The camera is disabled by the host.  No, there are no comments here.  Christian?

>> REMOTE MODERATOR: I have one comment in the chat.  It's written by Gotby.  Two ways IoT can contribute to a better system for children are by prioritizing privacy and safety in devices designed for children, such as smart toys and learning tools, and by using age-appropriate interfaces and content filters to enhance usability and safety.

>> MODERATOR: Okay.  Thank you for that appropriate remark.  By the way, Sonia, our first joint session on children's rights and IoT was six years ago.

>> SONIA LIVINGSTONE: Good to know.  Fantastic.  Good.

>> MODERATOR: Jonathan, please.

>> JONATHAN CAVE: Yeah, just a small follow-up.  Those were extremely useful remarks.  There is one thing I want to say about the use of technologies to identify children's ages, whether they are chronological or, let's call them, digital ages, which is that these, like all other age verification technologies, can of course be bypassed.

 

And one particular concern we should have is that when these bypass approaches become generally used by children or by groups of children, there may be an adverse selection, in the sense that those most likely to bypass those protections may be those most at risk, either as individuals or in the groups through which these practices are shared and disseminated.

 

I remember when I was growing up, our county was 21 for drinking alcohol and the neighbouring county was 18, and fake IDs were in widespread circulation among social networks.  So there is this issue about whether, although the solution may be very good, the path to the solution may be more harmful than where we start from now.  So, yeah.

>> MODERATOR: I think that's a very good point.  And we will take that up later on.  So with that I would like to move on to the second -- Sonia, you wanted to respond to that?

>> SONIA LIVINGSTONE: Well, I was just going to say -- thank you.  I was just going to say very briefly, in response to Jonathan, and thinking of a paper that I worked on as part of the euCONSENT project, also with Artemis (?): when we consult children and families, they value those kinds of work-arounds.  It provides a little flexibility.  And I might say I don't think a 5-year-old could get away with a fake ID to drink, but perhaps a 19-year-old could.

 

And it's that little bit of flexibility around hard age limits that many families and children say is important to them, providing just that flexibility for where they know their child is a bit less mature or a bit more mature.  So encoding everything in legislation and technology can in itself take away some agency from families.  And I think that's a challenge to consider.

>> JONATHAN CAVE: I would also say it takes away some agency from the children themselves, who have to learn how to navigate these environments.  And there is this tension between asking whether the environment can be made safe and, if not, whether denying access -- as for example in Australia -- is the appropriate approach, or whether some more active form of engaged filtering is.

 

But then you have to move away from the binary, of course.  Because you do have to not only gauge how mature children are but provide them with curated experiences, perhaps with parental controls, to be able to safely navigate these environments.

>> MODERATOR: So I think there are the legal limits and there are the practical limits, and I think online tools can be more useful as practical limits than limits imposed through regulatory legislation.  Jutta?

>> JUTTA: Yes, as we know from the GDPR, there is a hard-set limit -- it's between 13 and 16, but in every country it's a hard limit, either 13, 14, 15 or 16.  And what the European Commission is going to discuss is more about age brackets, which would mean a certain age range, say between 13 and 15, or 16 to 18, or something like that.  So that you have a bit of a range.

 

And there the issue of maturity comes in.  So you might have a 13-year-old that is as mature as a 15-year-old, and a 15-year-old that is only like someone who is 14, or 11 or 12, something like that.

 

So we are not talking about an exact age threshold, but about age brackets.  We get some of this flexibility for the children as well as for the families.  And it also plays into the concept of maturity.  So it makes it more flexible.  Thank you.
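To make the threshold-versus-bracket contrast concrete, here is a minimal sketch in Python.  It is illustrative only; the bracket boundaries, tier names and the crude maturity offset are assumptions for the sake of the example, not anything proposed in the session or found in any regulation.

```python
# Hypothetical sketch: a hard consent threshold versus age brackets.
from dataclasses import dataclass

# A hard threshold is a single cut-off: one birthday changes everything.
def may_consent_hard(age: int, country_threshold: int = 16) -> bool:
    """GDPR-style rule: each country fixes one value in 13..16."""
    return age >= country_threshold

@dataclass
class Bracket:
    low: int          # inclusive lower bound of the age range
    high: int         # inclusive upper bound
    experience: str   # age-appropriate service tier for this range

# Brackets map an age *range* to a tier, leaving room to shift a child
# up or down one bracket based on an assessed maturity signal.
BRACKETS = [
    Bracket(0, 12, "child experience"),
    Bracket(13, 15, "young-teen experience"),
    Bracket(16, 17, "older-teen experience"),
    Bracket(18, 200, "adult experience"),
]

def bracket_for(age: int, maturity_offset: int = 0) -> Bracket:
    """Pick a bracket; maturity_offset (-1, 0, +1) nudges the choice."""
    idx = next(i for i, b in enumerate(BRACKETS) if b.low <= age <= b.high)
    idx = max(0, min(len(BRACKETS) - 1, idx + maturity_offset))
    return BRACKETS[idx]

if __name__ == "__main__":
    print(may_consent_hard(15))                           # False: hard cut-off
    print(bracket_for(15).experience)                     # young-teen experience
    print(bracket_for(15, maturity_offset=1).experience)  # older-teen experience
```

The point of the sketch is only that a bracket plus a maturity signal degrades gracefully around the boundary, whereas the hard threshold flips from all to nothing.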

>> MODERATOR: Yes, thank you very much.  Dorothy, please.

>> REMOTE MODERATOR: Dorothy wrote in the comments -- it's repeating a little bit what Jonathan said.  She wrote: denying access will always encourage teens to look for work-arounds and, as in the case of alcohol, engage in dangerous behaviour because they have no guidance.  Why don't we put more emphasis on media literacy so they understand how to protect themselves?

>> MODERATOR: Yes, thanks for that.  And also the brackets come into that as well.  Please?

>> QUESTION: So I want to speak on why

(Audio Difficulties)

Looking into all aspects of children is equally important when we are defining age limits on the accessibility of technology for children.

 

What category of children are we talking about?  Because it might just differ, you know, in their sense of understanding about technology, and also their sense of understanding about using the technology.

>> MODERATOR: Thanks.  If you want to speak, please raise your hand.  She will be speaking shortly on AI as well.  Thank you for that.  So how can we help ensure that parents, children and their environments know how IoT devices can serve them?  That is the next topic we would like to dive into.

 

So basically, we have all of these goods from all over the world, with different capacities, and we have found in the past that, for instance, the security of these devices is sometimes limited to none, or set with a default password like 'admin'.  That's not useful.

 

Also, we found in the past that some devices, for instance, sent data back to the factory or wherever, without users being aware or knowledgeable of that.  Now legislation asks for more clarity on that.

 

At the same time, legislation is per country.  So how can we together get a good tool here that helps us to understand what we are actually buying and what we actually start using?  From the IoT perspective, we had a big discussion last year where it was made clear that labeling of devices and of services is crucial.

 

And this labeling even needs to be dynamic, because with upgrades of software and such, the label may change.  So a label that can be linked to an online repository would be crucial.  And certification of that, of course, is important, because one could say anything there, and how do we know that it's true?
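A minimal sketch of that dynamic-label idea, assuming an in-memory dictionary as a stand-in for the online repository: the box only carries a stable identifier, while the current claims live in the repository and can change with firmware updates.  All field names, the device ID and the certifier name are illustrative assumptions, not any real labeling scheme.

```python
# Hypothetical dynamic device label: the printed label is just an ID;
# the live record sits in an online repository.
from dataclasses import dataclass
from datetime import date

@dataclass
class LabelRecord:
    firmware: str
    data_destinations: list[str]        # where the device's data streams go
    security_support_until: date        # end of security updates
    certified_by: str | None = None     # None until a certifier signs off

# Stand-in for an online repository keyed by the ID printed on the device.
LABEL_REPOSITORY: dict[str, LabelRecord] = {
    "example-cam-0001": LabelRecord(
        firmware="2.1",
        data_destinations=["local hub only"],
        security_support_until=date(2027, 1, 1),
        certified_by="hypothetical-cert-body",
    ),
}

def current_label(device_id: str) -> LabelRecord | None:
    """Look up the live label; a real scheme would fetch and verify a
    signed record over the network rather than read a local dict."""
    return LABEL_REPOSITORY.get(device_id)

# A firmware update changes the device's behavior, so the repository entry
# is updated too -- the printed label on the box never has to change.
LABEL_REPOSITORY["example-cam-0001"].firmware = "2.2"
LABEL_REPOSITORY["example-cam-0001"].data_destinations.append("vendor cloud")

print(current_label("example-cam-0001"))
```

The certification question raised above is, in this picture, the question of who is allowed to write and sign such records.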

 

So some certainty needs to be built in.  And there are different certification schemes.  This is not the session to go very deep into the differences between those.

 

At this moment, labeling and certification schemes are also being discussed around the world and put in place.  For instance, in the U.S. a framework has been put in place; in Singapore action has been taken, as at other places; and in Europe as well, as part of the Digital Services Act.  And what we see now is the diplomatic conversation beginning: these countries are talking to each other about how we can do this in such a way that we can recognize each other's certifications, so that the labels of other countries are useful for us as well.  This is the beginning, but we are not there yet.

 

The deeper intent of labeling and certification -- how do I know this is correct? -- is basically that it empowers users to make smarter choices.  So next to security, which we discussed last year, it should also be about data security: where are data streams going?  And what has come up over the years is more and more emphasis on how much energy devices use, like what IEEE is doing for electronic devices.  So that's what they can offer.  I'm very interested to hear your perspective on what this can do for age awareness and appropriateness.

>> ABHILASH: Thank you.  I want to talk a little bit about why age assurance matters from a legal perspective as a starting point.  We have had laws for a very long time that require some form of age assurance for various content, services and goods online, and some of these laws have explicit requirements of age verification or age assurance.  Some of them are implied.

 

But in practice there has been very little out there for decades, really.  We have had laws that have not been enforced properly because they have not been complemented with appropriate age assurance tools.  The notable exception is probably online gambling, where the law seems to have worked, in some jurisdictions at least.

 

The U.K. is a good example, with the Gambling Commission.  But part of the reason is that it's not just about age assurance; it's also about identity verification.  In all the other cases, when we look at the U.S. and the EU, there were few verification tools that could actually help implement legislation.

 

And content was the most problematic of all of that, for a variety of reasons, not just because there are cultural variations in Europe, even within the 18-and-under category.  But there was a disconnect between the principle of self-regulation that the EU has advocated, especially for content, on the one hand, and legislation requiring the adoption of age assurance tools on the other.  So that has not led to a happy situation.

 

The law on the one hand required age assurance to protect children, but the practical reality is there hasn't been any useful means of enforcing that legislation in practice.

 

There is a legal principle which suggests that if a law cannot be enforced, it is unlikely to be followed by the people bound by it.  You can see a good example in copyright infringement online.  It's not that people don't understand they are infringing copyright; it's that they can do it without consequence.  Unfortunately that is the case with most laws that require age assurance, as was mentioned before.

 

So what I'm trying to say is that it's important that age assurance, or effective age assurance, complements legislation for the legislation to work.  And that is a starting point.

 

As Jonathan said, we already have too many rules, too many laws, so the answer is not to have more legislation.  The starting point is to make sure what we have is enforceable and practical.

 

Now things are changing, with laws specifically mandating age assurance and specific obligations on platforms and websites for noncompliance.  But it's not without problems.  The problem with age assurance, in my view -- the fundamental problem -- is that it's a binary debate.  Essentially it's been a debate about children out and adults in.  And that's not how it should be.  Sonia already talked about it: you cannot put everyone under the age of 18 in one category, use age assurance on them, and have adults on the other side.

 

But there's also the other debate, with adults who feel strongly about privacy and their ability to access the internet without any restrictions and without any hurdles to go through, while it still being safe.  That balance has also not been struck appropriately thus far.

 

And there's also the issue that age assurance for children accessing content and services or purchasing goods varies across nations and across cultures; even within one country there are different cultural variations.  But the law does not always factor in the evolving capacities of children.  In that sense, it fails children.

 

To take the example of the EU Audiovisual Media Services Directive, which takes risk of harm as the basis for adopting appropriate safeguards and measures for age assurance: it is a principle that recognizes the evolving capacity of children -- an 18-year-old, say, is very different from a 16-year-old.  But legislation does not deal with the cultural variations in age assurance, even for what is generally perceived as harmful content for children, like pornography.  Even within Europe, there are variations as to the age assured for accessing pornography, even within the under-18 category.

 

So that's where I stand on age assurance laws.  I think we have toyed with self-regulation, especially in the online space, for more than three decades, and it hasn't worked.  What I'm saying is we don't need new age assurance laws where we already have age assurance laws.

 

What we need is workable, enforceable age assurance legislation.  And like you said, labeling and certification can be mandated by law or could be voluntary.  But they obviously have to go hand in hand with, and complement, legislation, because I do believe that measures like certification and labeling give consumers more choice and more autonomy, and also give children more autonomy.  But they cannot be a substitute for underlying legislation.  This is how I feel about it.

>> MODERATOR: Thank you for that.  Comments online on this subject?

>> REMOTE MODERATOR: There are discussions online but --

>> MODERATOR: Jonathan or Sonia, because they raised their hand.  So other comments?

>> REMOTE MODERATOR: There are comments but not to discuss --

>> MODERATOR: Thank you very much.  In that case, Jonathan please, your thoughts on this.

>> JONATHAN CAVE: Okay.  Thank you.  Thank you very much.  And thanks --

>> MODERATOR: I can't hear you right now.

>> JONATHAN CAVE: Am I still inaudible?

>> MODERATOR: One moment.  Yes, Jonathan?  No.

>> JONATHAN CAVE: I'm still inaudible?

>> MODERATOR: I see the technical team working on it.  I imagine it will be the same issue for her, because of something here in the room.  Can you say something?

>> JONATHAN CAVE: Something, something, something.

>> MODERATOR: We can't hear the online speakers.  We can't hear the online speakers.  One moment.

>> JONATHAN CAVE: Yes, no?

>> MODERATOR: We can hear you now.

>> JONATHAN CAVE: I will be very brief because of the technical delays.  I think one of the things we learned with age verification in relation to pornography is that the very existence of the single market, or the global use of these technologies, makes it very difficult to sustain differences.  Even attempts to tackle the problem by regulating payment platforms -- because you couldn't regulate content providers on the platforms -- sort of failed, because the content was coming from outside the jurisdiction.

 

And the fact that it was banned within the jurisdiction merely increased the profitability or the price of external supplies of this kind of potentially harmful content.

 

Another thing is that I completely agree that some mixture of self- and co-regulation and formal regulation, backed by a concept of a duty of care or harm-based regulation or principles-based regulation, is required to keep pace not just with the evolution of technology and people's understanding, but with how it reacts to existing bans and protections and regulations.

 

And the final point was to say that we should probably also be aware of the fact that certification schemes and other forms of high-profile regulation can convey a false sense of security.  But by the same token, it may be the case that some of the harms against which we regulate are no longer truly harmful, because people have evolved away from the point where they are vulnerable to them.

 

In that sense, I would just point out that in relation to disinformation, misinformation and malinformation, there's evidence that the younger generations are less harmed than their unrestricted adult counterparts.  So it may be that some of the harms we worry about cease to be harms, or are no longer appropriate to be tackled by legal means.  Okay, those are my comments.

>> MODERATOR: Thank you for that, Jonathan.  Sonia, please.

>> SONIA LIVINGSTONE: Thank you.  I want to acknowledge the conversation in the Zoom chat during the meeting identifying the range of stakeholders involved.  So, of course, maybe we should have said at the very beginning that in facing this new challenge of age assurance in relation to IoT, a whole host of actors are crucial.  They all play a role.

 

And there are difficult questions of balance, which will vary by culture.  So yes, we need to empower children and make sure these services are child-friendly, that they speak to them and are understandable by them.  We need to address parents.  We need to address educators and involve media literacy initiatives in exactly this domain.

 

But I wanted to make two points there.  One is that we can only educate people, the public, in school and so forth, insofar as the technologies are legible -- insofar as people can reasonably be expected to understand and learn what the harms are, where the harms might come from, and then what the levers are, the available resources for the public, the users, to address those.  And we are not there yet.

 

So on the question of balance, I think the spotlight for IoT is rightly on the industry and on the role of the state, as Abhilash said, in bringing legislation.

 

And on that point we have been doing some work trying to make the idea of child rights by design real and actionable.  We have been doing some work with the industry, the stakeholder group that is kind of most active and innovative in this space. 

 

So I wanted to open up the black box.  Because of the developers and the engineers, all the different professionals and all the different experts that make up the development of the technology, for the most part most are not aware of the child user who may or may not be at the end of the process.  Most of them have a different kind of user in mind, which is often an individual, not a family that might share passwords and share technologies, and by and large a relatively invulnerable and resilient user, rather than one with the range of abilities of the children we have thought about.

 

So let's look into this notion of the industry and think about where our ethical principles, our duty of care, our legal requirements and our child rights expectations will land within a company -- whether it's a small startup that is completely hard-pressed, just trying to get into the App Store, and has no idea of these, all the way through to an enormous company that has a lot of trust and safety focus at a relatively unheard-of level in the organisation, and a lot of engineers and innovators who are pushing forward without the knowledge that we are discussing today.

 

So pointing to industry, and to government regulating industry, just opens up the next set of challenges about who is to be aware of all of these issues, and how.

>> MODERATOR: Very well said.  And also an excellent segue to our next section.  Because basically this is about, maybe, educating the equipment through AI, and for sure about capacity building for parents, children and environments, which we will talk about in the last session.

 

I would like to invite Patricia to explain the role for AI that she sees in this interaction.  And after that we will go to Jonathan Cave.  Thank you.

>> PATRICIA: Yes.  Thank you for that.  I think with AI there is a lot of emphasis on children and their engagement with devices.  In terms of impact I see both the positive and the negative.  Positive, because it's also a learning platform for many children who are maybe slow developers; watching videos, learning and building their own capacities -- it's a good platform for them.

 

But on the contrary, when I talk about technology and its advancement and the impact on children, there is also a big challenge for them, in that children are being given every right to use their device

(Audio Difficulties)

Children are voicing what they feel and engaging with their device, which is also giving them that freedom of expression to learn, in the sense that the device answers them back when a child asks a question.  But the speaker or the device is unable to understand what the age of the child is.  So age assurance is a point there: is this a 6-year-old child asking a question, or a 13 or 15-year-old?

 

So that gap is not being identified as of now.  So that is where I feel that, you know, these technological advancements are playing a very big role.

 

On the point of how it is leading to a negative impact: there is overdependence on these technological tools, because for every small thing we go to technology, we go to the device, to solve that problem.  And that is also affecting the development of skills -- the physical development of the child, the mental development of the child -- because we are totally dependent on what the device responds to us.

 

In terms of standards, I feel that, you know, we need to have more defined standards where children have access to devices, and where the engagement of parents has a big role.  We also talked about the legislative part, and about the point that we need to have other stakeholders involved -- from the industry perspective, to follow these rules when any device is being designed from a child perspective.

 

Because developers, like what Sonia mentioned, are not always able to figure out whether the device has been designed from a child's perspective.  So it has to be ingrained to the point that any technology, by design, needs to be child-friendly as we advance with technology now.

 

Of course, reinforcing the point again and again: safety by design is a key concept, and for the future of technology we are looking to ensure that all of these aspects are taken into consideration, so that any child and every child is looked after, irrespective of their age and their own skills and learning capabilities as well.

 

So, coming from India, I have experienced the dangers, from observations of how technology is being used by children, and also how it is misleading them in terms of their own development and their engagement in online spaces as well -- whether it is in the space of online gaming or in the space of, you know, social media interactions, because somewhere it is the internet of things or the devices.

 

When we see that there is development on one side with technology, the flip side of it is the misuse of technology.  So we need to keep the right checks and balances.  I think that's what has been coming up again and again in the conversation as well: that we need to have the right checks and balances there.  Also because the internet of things is quite an alien term when we talk about it in India.

 

It's sad, but yes, some way or the other we need to break down this concept of the internet of things for people, to simplify the understanding of what exactly it means.  Like when we talk about trust and safety, it is again an alien term.  Somewhere we are advancing with the technology and the technological tools only with certain sectors of people who are involved in this whole game of designing and developing tools, but for the masses it is still an alien concept what the internet of things is and how the standards need to be defined.  That is the missing gap when we talk about child-friendly or safety by design as a concept.

 

Also, one way or the other, technology has been knocking on everybody's door.  A smart device, in terms of a phone, is in everybody's hand.  But having other gadgets is, again, more about which section of society can afford to engage with them; it is largely about the differences in the economic background of families.  So it is more the privileged sections that are encountering this problem and challenge of devices and the engagement there.

 

While a smartphone is in everybody's house, I feel this also needs global attention, because a phone is a device that any child can use -- and there has been an emphasis on the fact that we need to have more child-locked devices as well.

 

So I think I will stop there, with the last point that, you know, data governance plays a very big role here: whenever you are setting up any device and giving out data, you may also end up giving out a child's data, right?  So what is the governance of that data, the privacy aspect of children's data there?

>> MODERATOR: Thank you very much.  Some good points made.  Jonathan, I will leave the floor to you.

>> JONATHAN CAVE: Okay.  Thank you.  And thank you for that discussion.  I just have a few, perhaps slightly broader, points I would like to introduce.  I'm an educator.  One of the things that AI does is that it not only facilitates education and children's development in the ways that we normally understood it.

 

It also pre-empts or distorts it, more than people think.  One of the things is that the devices that children use learn about the child.  But in the process they also, as it were, programme the child.  They teach the child.

 

Now one of the things they teach them is to rely on the system for certain kinds of things.  All of us have to an extent outsourced our memory to our AI devices.

And we will ask for things that in the past we would have thought about.

 

When a child searches for information online or asks a question -- in the past they would have had to read a book, for example.  They would have read things that they weren't specifically looking for, and they would have had to think about them to develop an answer to the question.

 

If the AI gets very good, it simply answers the question it was asked.  The educational aspect of that is somewhat lost, and the child's dependence on the device becomes in a certain sense deeper.  The child becomes an interface that sets the device in motion.

 

Now this is something we have to deal with.  We might say what we need to do is to prevent it.  But my students say to me, with respect to the use of AI to write essays and so on, that it is a transferable skill: the world into which they grow will make use of these technologies, and learning how to use the technologies may be more important than learning to do without them the things that we used to ask them to do.

 

So there is a question here, a deep question, I think, about asking what kind of experience is best for children to help them become the kind of adults who can successfully work in this environment.  And there are some technical things we can do along the way, like developing specific or stratified large language models or small language models for children to use, or synthetic data, or digital twins, to put a sandbox around children using these technologies.
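As a rough illustration of that stratified-model idea, here is a minimal Python sketch: route a child's query to a model configuration matched to an assured age bracket rather than to one general-purpose model.  The model names, bracket labels and the stubbed generate() call are all illustrative assumptions, not any existing product or API.

```python
# Hypothetical sketch: stratified (small) language models per age bracket.
from dataclasses import dataclass

@dataclass
class ModelConfig:
    name: str                  # which (small) language model to use
    max_reading_level: int     # cap on response complexity
    encourage_followups: bool  # nudge the child to reason, not just consume

# One configuration per assured age bracket: a sandbox boundary in code.
CONFIGS = {
    "under-8": ModelConfig("small-child-model", max_reading_level=2,
                           encourage_followups=True),
    "8-12":    ModelConfig("child-model", max_reading_level=5,
                           encourage_followups=True),
    "13-17":   ModelConfig("teen-model", max_reading_level=9,
                           encourage_followups=False),
}

def generate(model: str, prompt: str) -> str:
    """Stub standing in for a real model call."""
    return f"[{model}] answer to: {prompt}"

def answer_child(bracket: str, question: str) -> str:
    # Fail safe: an unknown bracket falls back to the strictest tier.
    cfg = CONFIGS.get(bracket, CONFIGS["under-8"])
    reply = generate(cfg.name, question)
    if cfg.encourage_followups:
        # The point above: don't just answer -- keep the child thinking.
        reply += "\nWhat do you think the answer could be, and why?"
    return reply

print(answer_child("8-12", "Why is the sky blue?"))
```

The design choice worth noting is the fail-safe default: when the age bracket cannot be assured, the sketch falls back to the most protective configuration rather than the most permissive one.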

 

But I think the general lesson is that if the technologies used in this way to serve people are oriented toward solving past problems -- and developers often tend to do that -- they need to be required to think into the future about the consequences of what it is that they are doing.  And that requires a continuous conversation involving children, developers, parents and the rest of society.  It doesn't just stop when the device is released into use.

 

And a final point on games: it's certainly true that games, particularly immersive online games, have a kind of reality, or grip on a person, that is greater than other stimuli.  They can cut through in ways that normal human-to-human contact doesn't always do.

 

We know that from neuroscience experiments as well as normal experience, which suggests that these games could actually be used to help people navigate this new world and to promote ethical development, instead of sort of attenuating ethical and moral sensitivities among children.

 

And then the final thing is that there is a difference -- and this was very compelling, brought up with the experience from India -- between the technologies as designed for and used by elites, whether they are privileged elites in terms of money or trained elites that can navigate and trust them, and the same technologies used by the masses, when the costs drop, for example, and the uses become different and evolve away from those that the developers originally intended.

 

So I think that is a fundamental issue that needs to be dealt with at the development and deployment level.  And the final thing to say is that one of the things AI can do is police the problems that AI creates.  So one of the things that one would expect a machine learning model, a neural net model, to do is to keep track of how these technologies are changing our children, and to respond appropriately.

 

So I don't know that the solution to the problems created by AI is more AI.  I would hesitate to actually endorse that, because then we do give up our human agency.  But those were sort of my concluding thoughts on AI in this respect.

>> MODERATOR: Thank you very much for that.  Yeah, that keeps us going and informs us.  A clear call to be aware of the dependency that may grow with AI; safety for kids from the outset, right, by design; the warning about the digital divide; and the warning on human agency -- we may want to keep that in some way or another.

>> JONATHAN CAVE: The other thing I would just add to that is the dictum that play is a child's work.  Engaging with these technologies in play allows us to develop them in ways that using them in anger or for serious reasons does not.

 

And there's some really interesting work going on at Oxford on play as a state of mind when we engage with technologies.  Okay.

>> MODERATOR: Thank you so much.  Anything from the room?

>> REMOTE MODERATOR: No comment currently, but there was a discussion, or a hint, that it's not only necessary that children understand how IoT works and what the functionality behind it is; parents must also know how it works and how it could influence their children.  But maybe that's an aspect we can add to the next block too.

>> MODERATOR: Yes, we will come back to that at the end.  Because it's not only about children and parents but also the environment, like teachers and baby-sitters, but also social environments.  So we will take that to the end.

We are learning from AI and with AI, but AI is learning while we do it.

And let's make sure it learns in the right ways, taking into account the values that we share around the world.  And those may not be all the values you have been getting from your parents, but some values are shared, like inclusiveness, the recognition and valuing of diversity, human agency, and privacy, I think.  These are some of the clear examples of values that we share.

 

For new tools, new developments, taking this into account is one thing.  But let's also keep in mind: what about all the old stuff that is already out there, and evolving?

 

Now, on standards: we heard about legislation, we heard about industry practice.  And in a way, if we look at, for instance, safety from electric shocks, we have the IEEE standards -- global standards.

 

For internet standards we have the IETF standards that set certain rules.  They are voluntary, but they are industry standards, and they are adopted and at least agreed and discussed around the world.  And more of this is likely to come.  Jutta, please?

>> JUTTA: If I may come in, since you got back to the standards.  It was said that certification schemes should be mandated by law.  That would be the first step; to go further with IoT, when we have a mandatory certification scheme or labeling, it also needs to be accepted.

 

And one example we have known for about 20 years now is Section 508, which at that time obliged the U.S. administration to have accessibility as a precondition in procurement.  So from that time on, any product that was bought by the administration in the United States needed to be accessible for people with disabilities.

 

And that alone set in motion the whole process of having a broad range of products certified as accessible, and it also brought the prices down.  The products became affordable because the administration was obliged to buy only those products that are accessible.  And if we could come to that stage -- not only labeling and certification, but having it as a procurement precondition -- that, I do think, would really help to bring forward labeled IoT.  Do you understand what I mean?

>> MODERATOR: Absolutely.

>> JUTTA: It just came to my mind that it's a really good example that we could follow up on.

>> MODERATOR: Yes, I see Jonathan clapping his hands, because we have talked a lot about this.  And basically we have standards and legislation.

 

The problem with legislation is that it's per jurisdiction.  And then you can start to harmonize across jurisdictions, and that takes time.  But at least if there are principles that are globally recognized, you have something to go for.  And organisations like IEEE, IETF, ISO and others play an important role in that.  I know Jonathan is much more of an expert in this than I am, and I see he raised his hand.  Jonathan?

>> JONATHAN CAVE: Yes, thank you very much.  That's a brilliant point.  The idea of using public procurement as a tool, as a sort of complement either to self-regulation or to formal regulation, is, I think, one that has worked in a number of areas.  One of the things it can do, as Jutta mentioned, is to set a floor under the kind of capability to which we wish to provide a stimulus, an economic stimulus.  That things have to be accessible.  They have to have certain capabilities.  But it can also create a kind of direction of competition.

 

So when you specify a procurement, part of it is the requirements that the proposed solutions have to meet.  The other part is the things on which you award scores.

 

So procurement tenders written appropriately can also stimulate innovation to come up with better and more effective solutions.  So there's that part, the kind of launching-customer aspect that puts money into developments that might not yet have a market home, that might not yet be profitable for people to provide, but which with certain development or certain economies of scale might become profitable.

 

And you can do that without putting governments in the position of saying this is what we need.  Because governments are particularly bad at picking winners and specifying solutions.  But what they can do is to move the whole industry in the direction of providing these things.

 

So that also happens.  And the final point is, when it comes to the adoption of standards within the procurement process, European standards, although not developed by the EU, are often incorporated into public procurement tenders, with the idea being that you have to show compliance with the standard or equivalent performance.

 

And that introduces into this market-based alternative to regulation something which looks like an outcome-based or principles-based criterion.  So either you have to comply, or better still, you have to show that you can do better.  And if you do that, you harness the innovation of industry and indeed give the customer some say in the matter, and not just a negotiator or procurement officer within a government bureaucracy.

>> MODERATOR: Thanks for that.

>> JONATHAN CAVE: I think it's really profitable.

>> MODERATOR: A clear example is, for instance, internet security, where there are standards that address flaws in the current system, like RPKI.  These are standards that can be adopted by

(Audio Difficulties)

Now these standards are, again, global, but they are voluntary.  In the Netherlands, for instance, the Dutch administration does include them in its tendering for services.  And with that, they ensure that their service providers support them, so that I as a citizen can also go for those services, because the services get that basis.
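For readers unfamiliar with RPKI, here is a minimal sketch of the route origin validation logic the standard defines (simplified from RFC 6811), with made-up example data.  Real deployments use relying-party validator software feeding routers; this only illustrates the valid / invalid / not-found outcomes.

```python
# Simplified RPKI route origin validation with hypothetical ROA data.
import ipaddress

# A ROA says: this AS may originate this prefix, up to max_length specific.
# (192.0.2.0/24 is a documentation prefix; AS64500 is a reserved example ASN.)
ROAS = [
    ("192.0.2.0/24", 24, 64500),
]

def validate(prefix: str, origin_asn: int) -> str:
    """Return the RFC 6811 validation state for a route announcement."""
    route = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in ROAS:
        roa_net = ipaddress.ip_network(roa_prefix)
        if route.subnet_of(roa_net):  # a ROA covers this route
            covered = True
            if asn == origin_asn and route.prefixlen <= max_len:
                return "valid"
    # Covered by a ROA but no match -> invalid; no covering ROA -> not-found.
    return "invalid" if covered else "not-found"

print(validate("192.0.2.0/24", 64500))      # valid
print(validate("192.0.2.0/24", 64501))      # invalid: wrong origin AS
print(validate("198.51.100.0/24", 64500))   # not-found: no covering ROA
```

Whether a provider drops "invalid" routes is exactly the kind of voluntary adoption that, as described above, procurement requirements can turn into practice.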

 

So that's one of the examples.  And yes, if government isn't sure, at least it can help with the direction.  So Jutta, thanks for raising it.  Jonathan, thanks for bringing it home.  And you wanted to comment on that.

>> ABHILASH: Yes, thank you.  I wanted to follow up on what you just said about laws mandating labeling and certification.  I said laws could, rather than must, always recognizing there are some instances where it's not possible.  One other thing to add is the problem that we assume caregivers are educated and every child comes from a typical middle-class household, and that is not the case, especially where literacy rates, let alone digital literacy rates, are low.  That is where that kind of certification and labeling might be an added protection for children.  Thank you.

>> You gave the perfect segue to handing over to Sabrina, I would say.

>> MODERATOR: Yes.  There is also the remark from Dorothy online, who says there are so many people who are not online yet, and how do we make sure that they don't miss the boat?  So with the focus, after what we can have technology do, developed with AI and standards -- in the end it's about the people, and how we can make sure that people can use it as well.  Sabrina, please.

>> SABRINA: Thank you.  Good afternoon, everyone.  I kept quiet for the moment because I think it makes sense for me to come in at the very end, to complement the various aspects that have been mentioned already with how we can indeed build this bridge from the information and technology we have to the end users, which are of course primarily children and young people, but not exclusively.

 

Also parents, caregivers and teachers, but not to forget the other stakeholders and the policy makers in the industry.  I want to come in with a concrete example, representing the Better Internet for Kids initiative funded by the European Commission under the Digital Europe Programme.

 

The EU aims to create a safer online environment for children and young people.  In the European Union we have the Better Internet for Kids Plus strategy, which is based on three core pillars: child protection, child participation and child empowerment.  So, as was mentioned already, we need to try to empower children and young people to become agents of change.

 

But in order for them to do this, they need us adults.  They need a safe and responsible space.  And that, of course, is what the Better Internet for Kids initiative promotes: responsible use of the internet, protecting minors from online risks such as harmful and illegal content, but also providing resources for parents, for educators and for other stakeholders to better support them on aspects such as online safety and digital literacy.

 

And of course Better Internet for Kids also addresses the very prominent topic of age assurance, to ensure that children and young people engage with age-appropriate content and are protected from harmful content.  And to give some concrete examples of materials that you will find on the Better Internet for Kids portal: just earlier this year, together with Leiden University in the Netherlands, we published a mapping of age assurance typologies and requirements.

 

It's a very comprehensive report that gives an overview of different approaches to age assurance, and you also see the legal, ethical and technical considerations that were picked up by my fellow panelists.  Just to touch on some key areas: first of all, a diverse approach to age assurance -- really the finding that there is no one-size-fits-all solution; the crucial importance of privacy and data protection concerns, which were also already highlighted by Jonathan; and the balancing act between effectiveness and user experience.

 

As I said, this is a very comprehensive research report.  And similar to what was already mentioned about existing laws and policies, we need to ensure that this knowledge, this information, is translated into user-friendly guidance, so we also transmit this expertise and knowledge to educators and to parents, who are really, really crucial in this process.  And also, how can we build capacity and make sure this is properly implemented at local and national level?

 

And that's why, on the Better Internet for Kids portal, which you can find at betterinternetforkids.europa.eu, we very much put age assurance in the spotlight, specifically focusing on two stakeholder groups.

 

First of all, educators and families: really to provide resources to help with proper awareness raising, but also knowledge sharing to foster digital literacy.  I think this was also a comment given in the chat earlier -- how can we ensure better and proper media literacy education?  And that's why we developed an age assurance toolkit that includes age assurance explainers.

 

And just to give you some examples of what you can find in the toolkit: first of all, explaining what age assurance is in the first place.  As was also mentioned, and other examples were given before, age assurance might be a typology or a term that is not so accessible for many people.

 

That's why in this toolkit we also provide concrete examples of when age assurance comes into play, why it is so important, and how it can actually protect children.  I think that's also important for carers and parents to understand: why is this topic so important, and how can it protect my child?

 

And in addition to this, there is a resource -- I have a copy here, and you can see it's a much lighter report -- that we designed together with children and young people.  That's what we really try to do in most of the work we are doing: to work on these resources together with the end users, because ultimately it's for them.

 

So it's also very important to involve them in this process.  And I think it's always very eye-opening, because we are very much used to certain terminology that is quite self-explanatory for us, but for some people it is not so accessible. 

 

So that's also important, to really follow this co-creation process.  And earlier, Sonia touched upon the black box of the industry.  Of course, with the Better Internet for Kids initiative we really try to enrich the conversation and also have industry and policy makers around the table when we discuss certain topics. 

 

So on the website we also have resources aimed at digital service providers to check their own compliance, an assessment done in the form of a self-assessment questionnaire that we also developed with a university in the Netherlands.  The aim is really for the service provider to critically reflect on their services and how these may intersect with the protection of children and young people.

 

But what is important to note here is that this only provides guidance; it is not a legal compliance mechanism.  And here again, as was mentioned before, it's not one-size-fits-all when we talk about online service providers.  We talked about the gaming sector, social media, search engines. 

 

So I think we also need to acknowledge that diversity.  And maybe just to conclude, also to highlight on behalf of the European Commission that of course a lot of focus and work is being done in this space.  And this is also complemented by the work the European Commission is doing on age verification.  Following risk-based principles, the Commission, together with the EU member states, is also developing a European approach to age verification.  In that context, the European Commission is preparing a privacy-preserving age verification solution before the European Digital Identity Wallet is offered as of 2026 in the European Union.

>> SONIA LIVINGSTONE: The worry is when groups like us say parents must do this, educators must do that.  Of course we want them to.  But this is a major shift, with innovation in the commercial sphere placing an obligation on ordinary people and on the public sector. 

 

And it is -- you know, I think this is why conversations about regulation, certification, standards and obligations on industry are really so crucial.  Because otherwise, while the profit accrues on one side, the burden really does fall on those who are already extremely hard pressed.  Let's keep up the pressure on the industry without in any way undermining the argument that, of course, media literacy and public information and awareness are crucial.

>> MODERATOR: Yes, thank you very much for that.

Of course, regulatory innovation is also happening.  We see it in different approaches.  I'm European too; in the European Commission, for instance, the AI Act invites us to ask: 

 

What should we talk about?  What should we regulate?  How should good practice look?  And regulation doesn't only need to come from countries; it can also come from industry, as self-regulation.  Jonathan, yes, please?  I learned this from Jonathan.  So please, Jonathan.

>> JONATHAN CAVE: I wanted to applaud Sonia on a point, really.  Because in a lot of these things, and in this whole discussion, there is a transfer of responsibility from industry -- well, from the verifiers and the tech part of the industry to the service providers and the telecoms providers and the others who are already regulated, and from them to us.  To a certain extent, society is being used as a beta tester or alpha tester for these technologies: they are spat out, and some succeed and some don't. 

 

Maybe we grow a regulatory structure around them to make them more robust, but the irreversible changes that take place will by then have taken place, and cannot be undone even if we later come to regret them. 

 

And so some element of, A, a precautionary principle, and, B, an appropriate placement of responsibility should be important.  And when I say appropriate placement: these things are uncertain.  So where responsibility lies should reflect some mixture of being able to understand the uncertainties, being able to do something about them, and being able, in particular financially and politically, to survive the transition involved in getting from where we are now to a solution that we can not only live with but can accept and understand. 

 

And I think simply to provide and protect, or responding to industry by shoring up the crash barriers and so on, encourages industry to take less and less responsibility for the consequences of what it does, or to define them in narrower and narrower and more technocratic terms, and to say: this is safe because in lab tests it worked out safely.  We saw this with medicine.  This is why real-world evidence on the use of drugs is so important. 

 

They may do randomized clinical trials, but put the drugs in the real world and they don't work like that.  So there needs to be some way of shoring this up so that industry and people in government are participants in something, and not people discharging a predefined responsibility.  Anyway, I think that was a great point, Sonia.  Thanks for making it, political as it may have been.

>> MODERATOR: Thank you so much for that.

We go to this lady.  Can you introduce yourself to the room?

>> Thank you, and thank you for a very interesting session, which I unfortunately came a little late to.  But nevertheless -- my name is Helen Mason, I'm from Child Helpline International in the Netherlands.  We provide child helpline services 24/7 to children and young people via various interactive channels. 

 

And I think two points, really.  I would say that we must include civil society and the first-line responders in these kinds of discussions, because when things go wrong, they are the people that are actually talking to children and young people and dealing with reports of harms that have happened online.  And building the capacity of those frontline responders is absolutely crucial: to respond adequately, to report and know where to report, and to have proper alliances and referral protocols with law enforcement, for example, and with regulators. 

 

So our work at Child Helpline International advances this particular aspect, making sure all of our members are well equipped to respond to all kinds of incidents that might occur online.  And we have much data that shows an increase, for example, in areas like sextortion, with children not knowing where they should report.

 

Has a crime been committed, and what should they do next?  Should they delete the evidence, et cetera?  So building the frontline responders' capacity to respond is vital for us.  One more point I wanted to make as well is about the data that is generated by the child helplines themselves.

 

As a result of the conversations they have with children and young people, this is really a unique resource.  So I would really encourage all stakeholders to have a look at the information we collect.  It's around prevalence, help-seeking behaviour and trends. 

 

But also the case material has a lot more information about the actual experiences of children and young people.  Of course, this is all done very safely, working together with people like Sonia.  We can really use this information to feed back into policy, and I would really encourage all of the stakeholders to take a look at the information we are publishing online.  Thank you.

>> MODERATOR: Thank you very much for your remarks.  And now you, please.  Thank you.

>> Thanks for those comments, very useful, indeed, thank you.