The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> LUCA BELLI: Good morning, everyone. Okay. Can we start? Excellent. Good morning to everyone. And first of all, let me congratulate everyone for being here so early after a lot of partying, especially yesterday evening for many of us.
(Laughter)
So, congratulations for being here. And welcome to this session on A New Generation of Platform Regulations. My name is Luca Belli. I'm a professor, and together with my co‑moderator, Yasmin Curzi, we are going to moderate this session today, where we are going to speak about A New Generation of Platform Regulations. And let me just provide a little bit of insight on why we have organised this session and what the goals of the discussion we are going to have today are, before I leave the mic to Yasmin to moderate the session. The ‑‑ let's say the consensus I think we are starting to see is that there is a need to regulate platforms, but the point on which there is no consensus yet is how to do it in an effective way. So, what we have been doing over the past years is really to study platform responsibilities in the context of this Coalition, every year since 2014 actually, so we are almost 10 years old. Happy birthday next year.
And we have come to understand that there is an incredible influence of platforms not only on our lives as users but on markets, on democracies ‑‑ an enormous impact on human rights, on democratic processes, on market concentration. So we have noticed the emergence of several types of approaches that aim at tackling these risks, the systemic risks. In the paper that we are going to launch today, we have tried to identify trends in these emerging frameworks, especially analysing four different approaches, to some extent similar but with important differences. These four approaches are, of course, the European DMA and DSA that everyone is quite acquainted with, the Indian IT Rules, the Chinese provisions on algorithm recommendations and the Brazilian so‑called fake news bill. Why have we chosen these four frameworks? Because together they cover half of the world population ‑‑ almost 50 per cent of the world population lives in either Europe, China, India or Brazil ‑‑ so they are the most influential ones, but of course, they are not necessarily the best ones. It has just been a choice to focus the discussion a little at the start while still covering a large portion of the world population.
I don't think that I will enter into the details of the study. I will just leave the mic to Yasmin to start introducing our discussion and our distinguished speakers. So Yasmin, the floor is yours.
>> YASMIN CURZI: Thank you so much, Luca. I would like to thank you all for being here, and also, I'm very pleased to be part of this Coalition. I think this outcome is actually a reflection of the best possibilities of joining people from several parts of the world to write about their experiences, from their standpoint, about the initiatives of their countries regarding content moderation, platform regulation, etc. This collaborative work is going to be on our page at the IGF website soon, and I'm very pleased with the final outcome of it, and I hope that you enjoy the paper. And also, if you have any feedback once you read it, or are interested in reading it, please send us emails; and if you are interested in joining the Coalition, those who aren't part of it yet, please feel free also to get in touch with me and Luca.
So, without further ado ‑‑ we have several people and a short time here ‑‑ I would like to introduce first Samara, who is joining us from Brazil on Zoom. She's Director for the Promotion of Free Speech at the Special Secretariat for Social Communication of the Brazilian Government. Samara?
>> LUCA BELLI: Do we have Samara online? If we don't, we need ‑‑ maybe we can go directly to the next speaker ‑‑
>> YASMIN CURZI: I think she ‑‑
>> LUCA BELLI: Is she online?
>> YASMIN CURZI: Yes. Samara Castro is on Zoom?
>> Yes.
>> LUCA BELLI: Can we ask her to speak? Is she hearing us?
>> SAMARA CASTRO: Hi. Can you hear me?
>> LUCA BELLI: Hello.
>> YASMIN CURZI: Hi, Samara!
>> SAMARA CASTRO: Hello. I'm sorry. My Internet is very terrible today, as is my English.
(Laughter).
So I will try to share a little information about Brazil. It's night in Brazil now, so it's very different from Japan, but I will try to share some information. Well, I hope you are doing well. I want to give you a quick overview of what we will be discussing in my upcoming talk about how social media is being regulated in Brazil. If my Internet doesn't work, please advise me, because I can ‑‑
>> LUCA BELLI: It's working perfectly.
>> YASMIN CURZI: Yes.
>> LUCA BELLI: We can hear you well.
>> SAMARA CASTRO: Perfect. The current state of social media regulation efforts in Brazil, we are ‑‑
>> YASMIN CURZI: Oh, no.
>> LUCA BELLI: Okay.
>> YASMIN CURZI: Yeah.
>> LUCA BELLI: Let's try maybe to proceed with the next speaker while we fix Samara's connection.
>> YASMIN CURZI: Yes.
>> LUCA BELLI: So let me introduce Tatevik Grigoryan from UNESCO. As we all know, UNESCO has been leading an international effort on defining guidelines on platform regulations, so please, Tatevik, the floor is yours. You have a mic. Otherwise, you can use mine. Let me ask you to pass the ‑‑ there's one here.
>> TATEVIK GRIGORYAN: Good morning, everybody. Thank you very much for having me here. I know you would expect me to go into the details of the guidelines for the governance of digital platforms ‑‑ and it is true we're finalising the guidelines ‑‑ but I would not like to go into the details of the guidelines for the moment. There have been very wide, open consultations globally for months, so I'm sure many of you are aware. Instead, I would like to talk about another framework, which is kind of a step before the regulation or guidelines of the Internet and online platforms.
So, I would like to talk then about the concept of Internet universality, UNESCO's official position on the Internet ‑‑ well, social media is based on the Internet ‑‑ and social platforms. So, basically, UNESCO's position is that the Internet should be universal globally and should be based on four principles and five categories of indicators, which is what we call ROAM‑X. ROAM‑X stands for an Internet that is based on human Rights, Open, Accessible to all, governed by Multi‑stakeholder participation, and the X addresses crosscutting issues such as gender equality, safety and security online, children's rights, sustainable development and the environment.
So, these are basically indicators ‑‑ a set of 303 indicators, with 109 core indicators ‑‑ which allow governments and other stakeholders to measure the development and the state of the digital environment, to have a holistic understanding of it at the national level, and to make informed policy recommendations.
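To make the structure of such an indicator framework concrete, here is a minimal, hypothetical sketch of how a ROAM‑X‑style assessment could be represented and aggregated. The five category names follow the ROAM‑X principles described above; the indicator names and scores are invented for illustration and are not UNESCO's actual indicators, methodology or data.

```python
# Hypothetical sketch of a ROAM-X-style assessment. Category names follow
# the ROAM-X principles; the indicators and scores here are invented.
from dataclasses import dataclass, field

@dataclass
class Category:
    name: str                                                # e.g. "Rights"
    scores: dict[str, float] = field(default_factory=dict)   # indicator -> 0..1

    def average(self) -> float:
        """Mean score across the indicators assessed in this category."""
        return sum(self.scores.values()) / len(self.scores) if self.scores else 0.0

assessment = [
    Category("Rights", {"freedom_of_expression": 0.7, "privacy": 0.5}),
    Category("Openness", {"open_data": 0.6}),
    Category("Accessibility", {"affordability": 0.4}),
    Category("Multi-stakeholder", {"participation": 0.8}),
    Category("Cross-cutting (X)", {"gender_equality": 0.5, "child_safety": 0.6}),
]

# A per-category summary of the kind that could feed policy recommendations.
for cat in assessment:
    print(f"{cat.name}: {cat.average():.2f}")
```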
So, when we talk about regulating or governing platforms, we see the need to address issues such as ‑‑ well, we talk about human rights, such as the right to freedom of expression or access to information, or the right to privacy, or open content, open data, and many other issues. These are issues that are addressed in this framework, and the use of this framework really helps the countries to have this comprehensive understanding and inform their regulatory processes or digital transformation strategies. I would also like to highlight the fact that this process is participatory and has a multistakeholder approach, which is essential for governing the digital environment and digital platforms. It brings together civil society, governments, academia and the private sector, and initiates dialogue throughout the process ‑‑ but hopefully also after the process ‑‑ allowing the stakeholders which would otherwise not necessarily be involved in this process of creating all these guidelines an opportunity to have their voice in formulating these recommendations and later on in implementing those recommendations.
Around the world, 40 countries are currently implementing this framework, and in many it has had a serious, positive impact. Brazil was actually the first country to implement it, very successfully, and some of the recommendations were later reflected in national law. The same goes for countries in Africa: for example, after this assessment Senegal created an observatory of Internet activity, or put in place a (?) right to the Internet, and there are many other examples across the world. And, as I spoke about Africa ‑‑ I was just discussing with our technical adviser, Simon, here ‑‑ (?) very often we hear from the countries that they take up the international regulations and try to adapt them, but the local context is not necessarily taken into consideration when it comes to just adopting the international ‑‑ European, for example ‑‑ acts. The Internet universality framework gives an opportunity to take into consideration the local context, having contextual indicators and crosscutting indicators, and really gives the chance of seeing the local context and also taking into consideration, potentially, cultural aspects of the state. And another point, which could be relevant for the topic of the discussion: this framework can also provide a potential way of monitoring the implementation and impact of the regulatory frameworks that are put in place, giving the opportunity of conducting monitoring assessments before and after the frameworks are put in place, having already given a holistic overview of the state of play. So I'll stop here. I'm happy to contribute further. Thank you very much.
>> YASMIN CURZI: Thank you so much, Tatevik. I'm going to check if Samara Castro is available again.
>> LUCA BELLI: Okay. Let's proceed then.
>> SAMARA CASTRO: I think I'm available.
>> YASMIN CURZI: Okay.
(Laughter)
>> YASMIN CURZI: Samara Castro, the floor is yours.
>> SAMARA CASTRO: I am happy and honoured to be here. Today, I'm going to share a personal view on the regulations in Brazil. First of all, it's important to understand the current landscape of social media regulatory efforts in our nation.
We occupy very important places in the next few years. We are leading the G20. We are heading the UN Security Council. We are presiding over the BRICS. We are chairing the COP 30. So despite challenges like an attempted coup or escalated violence, Brazil has taken these important roles. This is our chance to take significant steps forward, ahead of other nations. So it's important to point out what the different branches of power are doing (?) to manage online platforms. Of course, I'm speaking as a government, as a (?) administration. So, the legislative branch is discussing two different bills, the bill 2630 and the bill 2370. The bill 2630 covers online platforms (?) including transparency, responsibility, duty of care, risks, systemic risks. On the other hand, the bill 2370, which goes together with the first, deals with controversial topics like copyright and (?). In the Executive Branch, we are focussed on three areas. We focus on contributing to these proposed bills and creating a policy to support diversity and sustainable journalism. This involves guidelines for advertising, ensuring government funds don't support misleading content or disinformation. As the government, we want to lead by example. We are developing a plan to combat false information about health policies and vaccines. Our approach to handling disinformation starts with vaccine topics. This is unique. We aim to use this to tackle other issues too, like disinformation about science, disinformation about the institutions, anything like that.
We orchestrated a comprehensive range of actions that the Executive Branch can systemically undertake to counteract the anti‑vax movement. We established a communications strategy, mobilised resources for response and collaboration to leverage the possibilities, and enhanced the government's outreach to the public. Lastly, we instructed public apps to provide accurate information. We also want to ensure these apps maybe have a zero‑rating guarantee, that our (?) features, and integrity by design. We planned many steps for (?) to fight against the anti‑vax movement. We've made plans to talk to people, brought together (?) problems and worked with tech companies. This is a very different plan.
In reference to the judiciary branch, they are involved in complex cases related to the (?). These cases have the potential to reshape the interpretation of the Internet framework. The goal is to reconsider how platforms are held accountable for the content they host. Moreover, the court has set up a task force to combat misinformation on these platforms. In conclusion, Brazil is deeply committed to fostering a safer, more transparent and accountable Internet, while also asserting our sovereignty and ensuring information integrity. Thank you for your attention. Sorry for my English and connection. And I stand (?) the subject. Thank you.
>> LUCA BELLI: Thank you very much, Samara, for the very deep overview, in only five minutes, of the many initiatives that Brazil is trying to lead and trying to implement. And I guess that everyone here has noticed, especially those who participate in the IGF, that somehow Brazil is back. People have noticed over the past years that there was a sort of withdrawal of Brazil from the international scene, and I think it's quite evident this year that there is a comeback of a lot of Brazilian initiatives and Brazilians around. One of these initiatives that Brazil has recently joined is the Partnership on Information and Democracy, which Michael Bach, here with us, is leading and chairing, and I also have the honour to be one of the members of the Steering Committee of the observatory. So Michael Bach, the floor is yours to tell us a little more about what the partnership is trying to do.
>> MICHAEL BACH: Good morning, everyone. Thanks for the introduction. It's great to be here. I'll spend a few minutes just to talk about what my organisation does and why it's relevant to the development of new regulations to check big tech and hold them to account. I'll go back a couple of years to 2019, when a dozen or so democracies came together under the International Partnership for Information and Democracy. They came together to establish a set of principles that would guide some of their commitments to ensure that technology serves our democratic institutions and ensures credible information, which is a key cornerstone of democracies.
Today, with Brazil joining as the 51st partnership member in August, it's a pretty robust group of democracies ‑‑ which ebb and flow over the years, as we've seen in some democracy indexes.
But that partnership provided a mandate to create the Forum on Information and Democracy, which is a civil society initiative to implement some of the key priorities. We do this in generally two large buckets. On the one hand, there is the development of policy recommendations and then submitting those to governments, civil society and companies as well, to utilize and integrate. We do this through a process that gathers experts from academia, research backgrounds, civil society ‑‑ a range of disciplines and experiences ‑‑ through Working Groups on key issues, and we've done one issue a year over the last four years, from infodemics to transparency to the future of journalism, and we're just kicking off a new round on the impact of artificial intelligence on democratic institutions.
Through these networks of researchers and academics, we're able to address one of the points made earlier about understanding local nuance and, importantly, the downstream impact of regulations that may work in the North but may have very different implications in the Global South, and so it's very important for us, I think, to incorporate those voices, and this process is a good way of doing that. By way of example, in one of the first processes that we undertook, led by Maria Ressa, on the infodemic, upwards of 60 of the recommendations made their way into the DSA. I think it's a great example of a process that contributes to that. Now, another area that our organisation works on is one Luca is involved in: the observatory. This is one of the key missing pieces as we address information integrity. The aim of the observatory, much like the IPCC for climate change, is to establish a common understanding of the facts ‑‑ of what the latest scientific evidence indicates is the impact of these technologies on our democratic institutions and information, which is so incredibly important to that. So the observatory was developed over the course of a year, driven by Shoshana Zuboff and a former head of the OECD, and now we're realising that plan with a Steering Committee of eminent experts from around the world, from all continents, to drive Working Group discussions on key areas in this first round this year. And that will then culminate next year in a report and hopefully some dynamic way to interact with the information.
All of this to say: it's, I think, a really interesting process by which there's a connection between governments and regulators and civil society and academics and researchers from around the world that results in some concrete change.
And because of this sort of Coalition of democracies, we have an opportunity to work with them on providing technical assistance to understand the recommendations and how they might be operationalised, recognizing that some countries have much more resource and capability to do this than others ‑‑ where you might have a parliamentarian in one country that has a staff of ten people, and in another you're lucky if you have someone who can help with your scheduling. And so that's what makes it really interesting and really powerful. So thank you for the floor to be able to explain that.
>> YASMIN CURZI: Thank you so much, Michael. Now I would like to call Professor Rolf Weber, who is on Zoom with us. He's a professor at the University of Zurich and will talk about the DSA ‑‑ I'm sorry, the DMA, the Digital Markets Act. Professor Rolf Weber is here with us.
>> ROLF WEBER: Yes, good morning. Can you hear me?
>> YASMIN CURZI: Yes, perfectly.
>> ROLF WEBER: Thank you very much for the invitation to contribute to this question at a very specific time, namely 2:00 in the morning.
(Laughter).
I have decided to particularly talk about, let's say, more academic aspects of platform regulation, because I'm of the opinion that there is a need to look more deeply into potential checks and balances mechanisms which could provide a better legal‑economic framework for platform regulation. Therefore, I would like to address two pillars which have not been discussed deeply in the past, namely accountability and particularly observability. As everybody knows, accountability encompasses the obligation of one legal entity to give account of, explain and justify the actions and positions taken to another person in an appropriate way. In the past, we often discussed transparency, but accountability also looks at the back side of the medal, at the responsibility side. Accountability concerns itself with power, and power implies responsibility. This is an assessment which, in fact, has been considered by the EU Commission with two recent acts which have been released, namely the DMA and DSA. The DMA looks at market structures and tries to (?) that monopolistic approaches can be combated at least to a certain extent. The DSA addresses more the contractual terms, the behavioural aspects of contractual relations. And the DSA also looks at the question to what extent platform providers can be obliged to provide information in a timely manner, how standards can be introduced that hold governing (?) accountable, and to what extent sanctions can be implemented.
So, the DMA and DSA, in a more academic perspective, are at least partly tackling the black box problem of algorithmic decision‑making. Of course, I am not arguing that the DMA and DSA are really perfect regulations. However, I do think that they have taken a first step in the right direction, even if a first (?) court decision has supported the (?) position that some specific provisions need not be (?) to (?). Just to complete my short intervention on this pillar: I think accountability should also be supported by the concept of auditability, being an institutionalised mechanism for the verification of platform information and platform data. So far, Europe has not really advanced much; a lot could be done in this field, and academic research is available (?) principles could be implemented. My second pillar of discussion would be the concept of observability as a way of thinking about the means and strategies necessary to hold platforms accountable. While observability partly operates in a similar way as transparency, it also deviates, most importantly by understanding accountability as a complex, dynamic social relation. Accountability should become a mechanism that can overcome the lack of sensitivity for fundamental power imbalances, strategic conclusions and false binaries between secrecy and openness.
The challenges raised by platforms as regulatory structures need to be treated more broadly, as I mentioned, beginning with the question of how large‑scale transnational environments heavily relying on technology as a mode of (?) can be assessed. Insofar, the DMA has introduced gatekeeper obligations and practices (?), and the question is to what extent these legal obligations can be made fruitful in daily life. At least basic rules now exist on how people need to be treated on online platforms, how connections between participants are made and structured, and which outcomes should be achievable.
Let me just finish my short intervention by saying that the principle of observability could also reflect some kind of acknowledgment that the volatility of platforms requires continuous observation. Insofar, the concept of observability should be based on public interest as a normative horizon for assessing and regulating the societal challenges of platformisation. In the context of the public sphere, public interest encompasses freedom of expression and freedom of information, fostering cultural and political diversity throughout the whole society. Only a (?) understood concept as a benchmark could reasonably regulate platform behaviour and also realise targeted transparency. Thank you very much for your attention.
>> LUCA BELLI: Thank you very much, Rolf, for this brilliant explanation of these key elements of the DMA and DSA and for providing a little bit of an overview of the DMA.
Now let's give the floor to Anita Gurumurthy, Executive Director of IT for Change, to switch from Europe to India and to start to understand a little bit what the key problems are that we need to address with regulation. Please, Anita, the floor is yours.
>> ANITA GURUMURTHY: Thank you. With your permission, I'll focus on India towards the end, but the work that Michael and I do really kind of reflects it over the past few years, and I really, really like the idea of focussing on A New Generation of Platform Regulation, and I think the focus very much should move from a certain individual, human rights‑centric approach to a structural, societal, democracy approach. And that would include, in my argumentation, the idea of democracy as well. The first thing we should look at is a broad sweep. If the (?) ‑‑ I don't know whether I pronounce it correctly, the French people in the room ‑‑ is about a future of society in individualisation, I think we need to look at concerns that are also international and global. So the debate needs to be reframed not only around how rights are mediated by data, AI and platform technologies and what it means for individuals, but also, globally, around how interconnections between international economic law, social media regulation and the way Internet platforms work shape certain things like international trade agreements, which then prevent the local autonomy of the public and public authorities to scrutinise algorithms. So in the Global South, if you're part of iPaq in (?), it's not about localization. I'm talking about the ability of public authorities ‑‑ municipal authorities ‑‑ to say, okay, if this is where there was harm, then how do you open up and scrutinize where the harm really was? And this really not only undercuts consumers but also workers ‑‑ for instance, you know, warehouse workers of Amazon who are forced to wear these gadgets during their work. Then there is the sovereignty agenda, which, of course, everybody understands, because we're really talking about a geopolitical issue where global strategic competition has really, really become contingent today on information warfare.
So, this is quite a large issue, I think, of human societies not being given the same rights, and, therefore, I would in fact argue that this is not about Internet fragmentation but about the division of societies into those who have rights and those who don't have rights. Of course, no one is sacred here at all. All of us are data points. But I wanted to say that one of the systematic pieces of research that we undertook a couple of years ago on femtech and menstruation apps really showed us how the GDPR applied to women in Europe very differently, and how training data sets are actually coming from women in the Global South who download European apps. So what you're actually seeing is a fragmentation that is completely untenable, because we don't even know how to contest these rights, because these are cross‑border issues.
So, with regard to social media, I would like to say that visibilities and invisibilities are very selectively mobilised ‑‑ it's not about making something ephemeral and something very, very pronounced, but actually using this in terms of the way the state (?) works, which you're already familiar with ‑‑ and the way in which this actually happens is really at the heart of the matter. And this, I think, is very, very antithetical to the way diversities are mobilised by big tech, and what you see is a completely unaccountable regime of misogyny there.
So what I would like to say is: let's explore democracy ‑‑ for instance, how a collective can shape its own social and collective preferences and rank certain priorities. For instance, yesterday, I was at a session where somebody from a community media organisation in Canada was bemoaning the fact that hundreds of local media are disappearing every day. So, what does this actually mean in terms of structuring not just the regulation of platforms but structuring society for digitalisation in a certain way? And that really is about the idea of freedoms not only as individual choice but the idea of freedoms as the structures that determine individual choice.
So, quickly, I want to jump on, and since the Digital Services Act has been mentioned, I want to mention one of the key weaknesses we see: while it talks about mitigation measures, while it really talks about the possibility that a duty of care can be placed on platforms, one missing piece is that you can only take on an individual company. As an individual, you can take on an individual company. As a group, you can take on an individual company. But it's simply not possible for you to take on an industry. You cannot take on this kind of architecture of impunity that exists out there. And I think that's really a problem, because you can say that my human rights are affected because harms were perpetrated on me, but in each case, the cost to the company is nothing. You just settle and go away. When you're paying fines ‑‑ Google has been paying fines in the European Union till kingdom come, right? So this is a question of culpability and liability, and if we keep shadow‑boxing we're not going to get anywhere. All this talk of the altruism of big tech is a futile exercise in the form of regulation and individualistic measures that really don't address information integrity. So we really do need to look at things like, in the AI Act, for instance ‑‑ again, a European invention ‑‑ proactive public disclosures. I think proactive public disclosures of the technical parameters: how did you build the code? And these are not necessarily breaking open trade secrets. What you're doing is essentially, you know, knowing the knowable and also prising open the (?) of the unknowable. And I think that's what we need to do. And I think that is where the AI Act, when it is enforced and when it is operational, will see some improvements.
Finally, I just want to talk about media pluralism, and I think that, as a civil society organisation, it's important to understand this kind of rating and ranking that's done by this (?) called relevance. I'm supposed to watch a film because it's relevant to me. You know? So it's not just about the cultural right of serendipity; this is about the information diet that's being provided to everybody, and collapsing it under this catchphrase called relevance is not going to work, because this is about enormous power and motives, and the paradox of choice. And pluralism might have us thinking about the equal valence of public interest media: why must I only watch things that I want to watch? I should be subjected, I think, to things that I'm uncomfortable watching, uncomfortable listening to. It's a very contentious debate, no doubt, and I do understand these problems. But it's like we all grew up in India on a diet of there being one channel on television, and we had to watch things like the farmers' programme although I was a creature of urban India. I have to understand those things. That's because you live in a diverse society.
I want to close with the Indian idea ‑‑ the less said about it, the better ‑‑ but I think we've been through several iterations, and the one that's most concerning is the 2023 amendment to the rules, which was notified by the Ministry, which authorises a fact‑checking unit of the Central Government to identify content online, in respect of any business of the Central Government, that it might deem fake, false or misleading, and the unilateral rights and powers of the state to actually take this content down by approaching Internet service providers. Of course, civil society has protested. Some people have protested more vehemently, but many of us are not able to protest. And I think I'll leave it there, because I think we're talking about the state of democracy, and I suppose we need to read between the lines.
>> YASMIN CURZI: Thank you so much, Anita. And thank you all from the first panel for such an enlightening conversation. I think we are able to see the main challenges of regulating platforms nowadays.
Also, I'd like to highlight that in our outcome this year we have another framework proposal for policy‑makers, this time with a much more international approach, and last year, as Rolf highlighted in his speech, we had another framework regarding transparency and accountability for platforms. So if anyone is interested in these topics, please also access our page at the IGF website. These documents are available there.
So I'd like to open for Q&A. We are super punctual, by the way. We have ten minutes for questions, if anyone here in the room would like to pose one.
>> LUCA BELLI: We already have two. Please introduce yourself.
>> YASMIN CURZI: Yes. Please.
>> LUCA BELLI: We have one, two, and three.
>> YASMIN CURZI: Let's try to keep to two minutes if possible.
>> Yeah. So this is ‑‑ I am from Türkiye, the observant. First, thank you all; your talks were very inspiring. I would like to give a quick example of how Internet accountability can be an issue in certain countries where there's not a lot of incentive to protect citizens sometimes. In Türkiye we are conducting investigations into malpractices of big tech companies. And we found mass, mass fraudulent activities using AI and micro‑targeting, in which 2 million Turkish citizens were impacted, and we reported that money is being earned out of scammers and phishing attacks, and nothing really happens, because there's no mechanism. There's no political climate. There are no mechanisms to hold them accountable in every single country. On the hierarchy of needs, people don't think that digital issues and digital problems are important. What happens with the DMA and DSA is that a safe Internet is (?) at this point. So we have a lot of findings where we prove algorithmic biases, we prove operational ‑‑ but on top of it, if you're in a certain other context, there are certain mechanisms in place to actually support platform accountability. So, my question would be: if your country is leading and is honest in terms of protecting citizens, how do you think you can leverage your power and your know‑how to actually support other countries, just so this issue doesn't become again a (?)? And secondly, my question ‑‑ actually, no. I think that's it for now.
And I would also like to add a second thing: all of these regulations have the potential to become also (?) of authoritarian governments. So disinformation regulation was something that was meant to be a positive thing when it was sold, and now it became a method for totalitarian governments to suppress voices. So any governance is going to have a turn, and it's going to somehow be a tool. It might be something about fraudulent stuff. It might be something about child protection. So my question is: are you actually watching how these rules and regulations are transforming and being adapted across different countries and different needs? Otherwise, it might be a problem as well. Sorry, that was a bit long. Thank you.
>> YASMIN CURZI: Thank you.
>> LUCA BELLI: Can we just take the other two questions, and then we can have some comments. And I think the gentleman there was second. Yes.
>> Sorry. Tatevik referred to me in her presentation. My name is Simon Ellis. I've been assessing the Internet under ROAM‑X in about 20 countries so far. And I think the basis of the UNESCO guidelines, which is a correct one, is that there's a balance here. We all want an Internet that is not involved in child abuse and misogyny, and, therefore, some control from the government side is necessary ‑‑ especially with the very large online platforms, the VLOPs of the EU legislation. And on the other hand, we know there should not be too much control, which then becomes a threat to democracy.
The second point really is that many developing countries have no idea how to regulate, and they have no idea how to deal with these big companies, as Michael Bach clearly enunciated as well. So they are adopting these principles, including the European laws, without a sense of knowing what they're doing. And in at least one country ‑‑ which I'm not going to name ‑‑ they adopted it, and then, just as the legislation was about to come into force, they realised what they had adopted and pulled back, because they realised that this gave consumers rights which the government didn't actually want to give them.
Various countries are at different stages of this, with different intentions behind it, so what I think the UNESCO guidelines are trying to do is create a basic starting point, some principles, from which countries can negotiate with the platforms. And I think that helps the companies as well. Meta does not want to negotiate separately with 300 countries in the world about what things are doing ‑‑ especially with small countries; I'm currently working, for example, in the Pacific, with countries which have no idea and no resources to assess this kind of thing.
So I think there's a balance to be struck here, and I don't know how that will work out. So, if you like, to turn it into a question: it will be very interesting to see how this works out. And also, I think at least there's enough here through UNESCO, as I've hinted: we can identify, through the ROAM‑X assessment in countries, what they need, where the gaps are in their legislation, where human rights problems emerge, and then we can at least begin to point them to sources which can help them address those kinds of issues. Thank you.
>> Hi. Good morning. I run a consultancy that works in public policy and public affairs. I'm a trustee of the Internet Watch Foundation. Just briefly, I should say first, I accept that probably our perspective on these issues depends on the democratic situation of the countries that we live in. So, if we're in a sort of democracy, we'll take maybe one view. If we're not in a democracy, we'll have other, different challenges, which makes this quite hard.
But just briefly, some quick points. I think the issue that we face is largely that privacy is being positioned by many as an absolute right rather than a conditional right, and, effectively, we're faced with a challenge of pervasive privacy as an attack on accountability and on democracy. I think we need to look at the way it has actually been used ‑‑ has been weaponised ‑‑ by the tech companies. Deliberate choices of technology are being made to allow for plausible deniability ‑‑ so, for example, by choosing to use end‑to‑end encryption to hide things like CSAM, and then arguing that, because a platform has chosen to use end‑to‑end encryption, when a government demands action on CSAM, that's an attack on encryption. When in reality it's because of the choice the platform made in the first place, deliberately, to give it plausible deniability and sidestep that accountability. They are also trying to show a unified position on these topics, mainly through corporate capture of civil society and enormous financial power in lobbying to overpower any opposing voices, and frankly, they're framing the discussion to favour the position of the global platforms ‑‑ for example, using end‑to‑end encryption and privacy to focus on government surveillance and divert attention from capitalism, which is quite clever, because capitalism is the bigger attack on privacy, at least in some democracies, accepting it may be different elsewhere. And using fragmentation as a topic to justify homogeneous implementation on platforms, which is really largely about the efficiency of ‑‑
>> YASMIN CURZI: I'm sorry to interrupt you.
>> LUCA BELLI: Ask your question.
>> The one request I'd make, though, is about Internet standards. A lot of choices that have been made in Internet standards make it really hard for regulations to work, and it's a real problem that governments and civil society are largely absent from Internet standards fora. It would be incredibly helpful to have more voices there to make counterarguments to the tech sector, to give the other point of view.
>> LUCA BELLI: Okay. I have the impression that all the questions were more comments than questions. So I think we can go ‑‑
>> YASMIN CURZI: Yes.
>> LUCA BELLI: ‑‑ to the next phase of the session.
>> YASMIN CURZI: So, I'd like to give the floor to Professor Monika Zalnieriute from the University of Sydney and also the Law Institute of the Lithuanian Centre for Social Sciences. Please, Monika. Thank you so much.
>> MONIKA ZALNIERIUTE: Thank you so much, Yasmin and Luca, for hosting and moderating this session and for having me here. Since I don't think we have so much time left, I'll just try to make a few points more generally on the new generation of platform regulations, and maybe build in some ways on Rolf's comments on EU regulation in particular. Also coming from the EU myself, I think it's good to somehow have a little bit of distance from our own regulatory initiatives, which are always sort of praised around the world as being the most novel and most ambitious and so on.
So, the point I would like to raise is that perhaps the new generation of AI and platform regulation is not something really different or new from what we had before, and this, I would argue, comes from the very proceduralist focus of a lot of our platform regulations. And what I mean by that is that there's a certain belief that procedures and safeguards in themselves are sufficient to counter platform power, to change the status quo, to provide safeguards against the abuses and the harms that we encounter.
The belief is very strong, and it stems, I think, from the popular comparison of the big platform companies to states, where we think that due process, rule of law, transparency and accountability ‑‑ these kinds of vague rule of law values that would temper public power ‑‑ will do the same here; that especially transparency, for example, would be a very effective tool in dealing with platform power.
And I think that, from there, we focus so much on small procedures and audits and checklists and various things like that that we miss the bigger picture. And what I mean by this is that, for example, the DSA really articulates a lot of different procedures, in many ways institutionalising a lot of private practices that were already there before, sort of constitutionalising the whole framework. But what I want to say is that, by focussing on procedures so much, we miss the large picture. So, for example, we rarely if at all ‑‑ and in this session as well, I noticed ‑‑ discuss, for example, US dominance, or environmental degradation, extraction of resources, exploitation of labour on a global scale. We don't really discuss that. The new AI Act in the EU talks about AI only as a finished consumer product. It doesn't really talk about environmental issues at all. It doesn't talk about labour exploitation. There's a colonial element in there that really needs to be brought to the front, but definitely it's not there.
So, I think that, by focussing so much on due process and due diligence and all these various proceduralist frameworks, we really don't talk about how our tech companies contribute to climate change and the exploitation of resources around the globe and so on. And I think that it's really problematic, because focussing on procedures as much as we do in our legal frameworks is often dangerous, because it gives the appearance of political neutrality: because we focus on procedures, we think we're doing the right thing. That's how lawyers think ‑‑ I think that's how we're trained to think, often. So, therefore, we're doing the right thing and, you know, we're politically neutral, and that's really problematic, and I think some people already commented here how various countries are adopting the same approach and so on.
But my argument is actually a bit stronger. I think that, even though we like to frame, for example, the US and the EU as really being in opposition, or at opposite ends of the spectrum in regulation, in the end it's very similar. Platform companies are actually doing the same thing, both in Europe and in the US. And yet we claim that the EU is leading, that its normative power is enormous and so on. What's the problem? There must be some problem where we're somehow legitimating the platform power that is there with all these small procedural rules. And also language: the legitimising effect of language, of constitutional values that we are using in the private, sort of self‑regulatory initiatives. And I have a colleague here who is on an oversight board as well, and we're good friends. So there's a lot of that sort of legitimising effect ‑‑ not only neutrality but also legitimising. I think a lot of the new generation of platform regulations does exactly that: it sort of legitimises the existing order. So I don't want to say that we don't need new regulation for digital platforms. On the contrary, we do need it. However, I think that the current model that we have, the latest wave of these regulations, doesn't really challenge the existing status quo ‑‑ which would mean perhaps somehow intervening in the business model or the legal foundations, how it is structured. We don't do that.
So, I would argue that they actually contribute to the institutionalisation of the current order established by the big tech companies, while promoting a narrative that, you know, we are creating a new, predictable and trustworthy online environment.
So to change that, we need something else: some more ambitious laws that would tackle the loopholes of the ‑‑ what are they called? I'm sorry ‑‑ safe harbour regimes and various other legal provisions like that, by which we actually let the big tech platforms just do their own thing without ever even sharing what exactly that is.
So, that's it. That will be my short contribution. And I hope to discuss that with you.
>> YASMIN CURZI: Thank you so much.
>> LUCA BELLI: Thank you. Thank you very much, Monika, also for reminding us that we need to be critical ‑‑ we speak often about the Brussels effect. Yesterday we had a very good panel on AI where, at some point, some participants raised the idea that there is a Brussels effect. We can discuss the Brussels effect, because we don't know if the regulation works, and sometimes we may think it's the best that we have, but it is not necessarily the best that we can have, so it's much more interesting to be critical. I see a hand raised, but we are finishing this set of presentations and then you will be the first to speak. Vittorio has been working from the community on these kinds of issues for several years, and he has a unique international and European perspective on this issue. So, please, Vittorio, the floor is yours.
>> VITTORIO BERTOLA: Thank you. I will deal with the (?). But I think that what you're saying is very important. The problem is that what we got with the DMA and the DSA is the maximum that political reality can give us. The (?) doesn't exist in terms of representation. So first something needs to happen in society before we get the European Parliament on the same line.
But, I mean, it's already quite important to get a proper implementation of the (?) after 20 years of complete liberalism in integration in the European Union. And I wanted to very quickly go through what has been happening with the (?), because it's very important to notice how this is already being reworked by the platforms: they find ways to reduce the impact of the law by working at the implementation level, when the (?) is gone, the media attention is mostly gone, and few people really care and are still watching what happens.
So, you might know that the DMA was approved last year. It entered into application in May. The first step was to identify the gatekeepers, so you need to get a list of the companies that are big enough to be regulated under these special rules ‑‑ they have at least €7.5 billion in turnover in the European Union and they have 45 million users. So in early September, like a month ago, the Commission finally designated the gatekeepers, and there were already some negative surprises in that. We got a list of six companies, which are the expected ones: Alphabet, Amazon, Apple, ByteDance, Meta, Microsoft. But there were some omissions. For example, if you go through the list of the messaging services ‑‑ so (?) let me get the correct one ‑‑ Apple's iMessage, even if it meets the thresholds that are set by law: someone, possibly Apple, convinced the Commission that maybe it's not so important, (?) it's just one‑third of the European users, so it's not (?). So the Commission decided to start another investigation and think for six months about whether iMessage will be included or not. And the same thing happened elsewhere. Some services were completely omitted, especially Gmail. I don't know how you can argue that Gmail is not a dominant email service, in the form of a number‑independent interpersonal communications service. And Google is the only search engine that falls under the DMA, because they decided that Bing ‑‑ even if in numbers it's big enough ‑‑ doesn't have enough market share. Still, it's big enough.
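To make the designation step concrete, here is a hypothetical sketch of the DMA Article 3 quantitative presumptions (roughly €7.5 billion annual EU turnover or €75 billion market capitalisation, together with 45 million monthly end users and 10,000 yearly business users in the EU). The company figures are invented, and the real process also includes rebuttal arguments and Commission investigations, which is exactly where the disputes described above play out.

```python
# Hypothetical sketch of the DMA Article 3 quantitative presumptions.
# Thresholds are simplified; real designation also involves rebuttals
# and Commission investigations, as described above.
from dataclasses import dataclass

EU_TURNOVER_EUR = 7.5e9          # annual EU turnover threshold
MARKET_CAP_EUR = 75e9            # alternative market-capitalisation threshold
MONTHLY_END_USERS = 45_000_000   # monthly active end users in the EU
YEARLY_BUSINESS_USERS = 10_000   # yearly active business users in the EU

@dataclass
class CorePlatformService:
    name: str
    eu_turnover_eur: float
    market_cap_eur: float
    monthly_end_users: int
    yearly_business_users: int

def meets_presumption(s: CorePlatformService) -> bool:
    """True if the service meets the quantitative gatekeeper thresholds."""
    financial = (s.eu_turnover_eur >= EU_TURNOVER_EUR
                 or s.market_cap_eur >= MARKET_CAP_EUR)
    scale = (s.monthly_end_users >= MONTHLY_END_USERS
             and s.yearly_business_users >= YEARLY_BUSINESS_USERS)
    return financial and scale

# Invented numbers: a service can meet the letter of the thresholds and
# still be argued out of designation at the implementation stage.
svc = CorePlatformService("example-messenger", 8.0e9, 90e9, 50_000_000, 12_000)
print(meets_presumption(svc))  # True
```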
And so, I mean, the point is that you have to care about the implementation, check what's happening and make sure that the designation is actually (?). We in the open source community in Europe are thinking about whether we should challenge these decisions, but it's completely untransparent at the moment. There are no documents, so you cannot read why the decisions were taken, and you cannot even read what the companies argued.
And there are more regulations coming out, and this is important in general. If you watch the American case of the Department of Justice against Google in the advertising industry: Google basically manipulated the bids for the advertising, so they don't always award the advertising to the highest bidder. They have something they call the long‑term value of the advertiser, which is completely untransparent, so (?) even if one advertiser has more money, I will give the ads to the other one. And again, they say: oh, but this is now done by an AI algorithm, so we don't know what it does; we're not able to tell you why it takes the decisions it takes.
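The bid‑manipulation dynamic described here can be made concrete with a purely illustrative sketch ‑‑ not Google's actual system: when each bid is weighted by an opaque per‑advertiser score before the winner is chosen, the highest monetary bid no longer reliably wins, and the weighting stays invisible to the bidders.

```python
# Purely illustrative: an auction where an opaque per-advertiser
# multiplier ("long-term value") decides the winner, so the highest
# monetary bid does not always win. Names and numbers are invented.

def run_auction(bids: dict[str, float], ltv: dict[str, float]) -> str:
    """Return the winner when bids are weighted by a hidden score."""
    return max(bids, key=lambda a: bids[a] * ltv.get(a, 1.0))

bids = {"advertiser_a": 10.0, "advertiser_b": 7.0}  # money offered
ltv = {"advertiser_a": 0.6, "advertiser_b": 1.2}    # hidden weighting

print(max(bids, key=bids.get))  # advertiser_a: the highest bid...
print(run_auction(bids, ltv))   # advertiser_b: ...loses after weighting
```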
So it's getting worse, actually. First encryption was talked about as a way of not being accountable; now they are using AI so they don't have to be accountable.
Finally, the only thing I wanted to say is that the implementation also goes through technical (?), the IETF, (?) messaging is going through (?), and again, you see all the same dynamics, in which the big tech companies have (?) people to send and can influence the result. And if you're not part of this circle of insiders, you can just say something, but possibly ‑‑ like happened to me yesterday ‑‑ you will be asked to make the presentation at midnight Japanese time and, of course, you do other things. You cannot be there at midnight because you just don't have the stamina, maybe. While the big tech people can (?) time just for this process and be there when it's necessary. So you see all these exclusionary factors. Thank you.
>> YASMIN CURZI: Thank you so much, Vittorio. Now I'd like to call Professor Alejandro Pisanti, who is on Zoom with us. The professor is from UNAM. Is he on Zoom with us? The floor is yours.
>> ALEJANDRO PISANTI: Can you hear me well?
>> YASMIN CURZI: Yes.
>> ALEJANDRO PISANTI: Thanks for inviting me to this session. I've been following it partly in text, because Zoom put me in two simultaneous meetings, plus one that was past midnight last night, so at 3:00 a.m. So I'll be very brief here, very concrete. I'm going to bring a complementary angle. This is not in opposition to what I've heard; it's rather complementary. I fear that the approach ‑‑ the Brussels approach, based on laws and more laws and more regulations ‑‑ can have a very limited effect on what we find harmful that's happening on platforms. It goes very much to the anatomy, whereas the problem is coming from the physiology of things, the way things actually work. Some of what we see, like disinformation and cybercrime, may amplify and cross borders and platforms, but in the end we are not touching the origin of the undesirable and frankly harmful things.
Briefly, I think we should in parallel look at the things proposed for managing things online and offline. We need to look separately at the hyper scale or mass scale of the Internet; at identity management, where we know that the Internet doesn't really give you an identity ‑‑ it doesn't come with an identity layer ‑‑ and therefore you have anonymity and so forth; and at the cross‑border effects, which in the case of platform regulation, as stated by the previous speakers, are a really strong stumbling block, because you have a very hard time forcing companies to do things where they don't have a legal presence under your authority.
Add the lowering of barriers and memory effects. When I look at some of the laws being proposed in many different countries to try to modulate what happens over platforms, I see them unworkable on any or all of these criteria, and I think it's a useful framework to go forward.
Finally, I would like to offer a very graphical example of what happens when you try really hard to control something that is so plastic, as was mentioned, where there's lots of lobbying power and lots of users trying to do both good and bad things. If you try to control these things more and more by (?), it's like having to carry a small amount of water in your hand and trying to control it by squeezing it tight: it will spill all over. It's completely out of your control. And that's probably one reason to call for caution, moderation and also, let's say, technical awareness in proposing forms of platform regulation that can actually achieve the effects that you want. Thank you.
>> LUCA BELLI: Thank you very much, Alejandro and also for a very rare moment of agreement between Alejandro and Vittorio. That happens once in a decade. So very good.
(Laughter).
Yeah. So let's now give the floor to Shilpa Jaswant, who was at the Jindal Global Law School and is now at (?) University.
>> SHILPA JASWANT: Thank you. This has been a wonderful journey and I'm really enjoying being part of this forum. First of all, over the past few days I have been realizing the point of democracy, how platforms have been infringing the democratic process that we've been seeing for thousands of years, and how it has been used as a tool, especially in some regimes, to make things much worse for people. Surprisingly, that has also been the core part of my research, which is why I want to start my presentation with what core values we are here for and what we are trying to talk about. The core values I have defined are autonomy, justice, fairness, justification and the freedoms guaranteed to all of us, especially the right to privacy and the right to work. And I also want to emphasise the idea of liberal democracy, which relies on the will of the people and the ability to freely deliberate on a policy that the state comes up with, especially when that policy constrains our rights. However, what we are seeing with these platforms is that they are in control of all our freedoms, they are constraining them, but there is no justification, and that is the problem.
And for the past decade or so, especially in some of these international organisations, what we've been hearing is that data is the new oil and we people are the miners, and I completely refuse this argument. I actually see data as a manifestation of the rights guaranteed to us, especially the right to privacy.
And I am also theorizing to the point that the right to privacy itself is a precondition to democracy and all the other rights that we have. Take an instance: when you cast your vote, you cast your vote behind a screen, because that's where you make your choice. And if that is revealed to the government, then, of course, there's going to be some sense of influence to change your opinion, to sway your vote, which is why I say privacy is a precondition to all your rights, especially your right to vote. What's happening right now? The data is behind black box algorithms: they collect the data, and they process data not individually but aggregate all our data together to basically profile us and target advertising, through which we are being sold different services, products and goods, whatever. And that's the business model ‑‑ if we do that, we're giving you personalised services ‑‑ but that is the problem. I was tracing when targeted advertising became such a problem. When did it happen? It was during the '70s and '80s, when all these big corporates, you know, were not able to get as many consumers as they wanted, so what they did was they just went around to people and asked, you know, what do you do in your life? They just started collecting information. And then they realised that once they collect enough information about people, they can work out what people want, and then they can categorise preferences. So after a few years, there were behavioural economists who framed this as revealed preference theory, and that's basically what platforms use. They collect our data every day, of everything we do, every movement we make, of everyone that is around us. And all they do is profile us. They track our preferences, and that's how you get your targeted ads. And that's how they are able to sell their products and services.
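The profiling loop described here ‑‑ logging interactions as revealed preferences and ranking ads against the resulting profile ‑‑ can be sketched in a few lines. This is a toy illustration; all event categories, ad names and data are invented.

```python
# Toy sketch of profiling and targeting: interactions are logged as
# "revealed preferences" and ads are ranked against the profile.
# All categories and data here are invented.
from collections import Counter

def build_profile(events: list[str]) -> Counter:
    """Aggregate a user's interaction events into a preference profile."""
    return Counter(events)

def pick_ad(profile: Counter, ads: dict[str, str]) -> str:
    """Choose the ad whose category scores highest in the profile."""
    return max(ads, key=lambda ad: profile[ads[ad]])

events = ["sports", "sports", "travel", "cooking", "sports"]
ads = {"running-shoes": "sports", "flight-deal": "travel"}

print(pick_ad(build_profile(events), ads))  # running-shoes: "sports" dominates
```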
What's worse is that this is starting to influence our opinions and our choices; we are buying things we never wanted in our lives. Worse still, it is influencing the fabric of democracy. We are seeing content, misinformation and malinformation that push people to vote in ways they probably wouldn't otherwise, because these days they do not know what the right information is or what its source is.
>> YASMIN CURZI: Sorry to interrupt. Can you wrap up?
>> SHILPA JASWANT: Sure. I hope I don't have to recount the evidence of Facebook influencing elections. As an outcome, I want to make a suggestion. I'm from India and, again, we have the amazing IT Rules and the IT Act, but what I really want to talk about is corporate culpability as a principle. It has been put forward by professors at Australian universities, especially Julia Powles, and by Professor Elise Bant and Jeannie Paterson, who is also my supervisor; I highly suggest people start reading about it. When we talk about corporate culpability and intention, then, as Monika just rightly said, all these procedures are not going to work, which is why we need to go a step beyond them. What this theory suggests is that we should assume there is an intention, because that is their model: they work on the basis that they are doing something bad, they know that if they continue doing it they can get away with it, and they have been doing it. All they get is millions and millions in fines, which they are able to pay off. The intentionality doesn't change.
I think this also makes the job much easier for regulators and courts, because now we can presume intention, the criminal intention or the criminal misconduct, and then go ahead and bring claims against these kinds of platforms.
I know there are several jurisdictional issues, because most of these platforms are located predominantly in (?) jurisdictions on this planet. But I think it comes down to what our states and governments want. Once we figure out what we want to do and what our visions are, I think it is not that difficult to use these theories and go beyond what we actually have.
>> YASMIN CURZI: Thank you, Shilpa. On platform regulation, we really do need to talk about the big techs: they have revenues bigger than the GDP of most of the countries in which they operate, so we also need to talk about competition. But now I'd like to call Sofia Chang, from Peking University (PKU). She's on Zoom with us.
>> SOFIA CHANG: Hi, everyone. I just finished my master's, and I'm also a researcher at the Centre for Technology and Society. The focus of my research is what China has been doing in terms of regulating digital technologies. So I'll try to be really, really brief, touching upon the provisions China issued on algorithmic recommendation management. These have been in effect since March last year, so we should already have seen some changes.
The three points I want to make are about the provisions on what they call user rights protection, or user empowerment, and on what the management obligations are supposed to be. When you look at the provisions, they are very, very ambitious. They cover the protection of minors, the elderly, workers and consumers, saying that algorithmic systems have to be designed considering their limitations and needs: fraud protections, especially for the elderly population; protections for workers, especially those who work for apps, such as delivery apps, that are governed by algorithmic recommendation; and protections for consumers, to avoid price discrimination and other practices that have already been covered somewhat in other legal provisions in the regulatory framework.
They also try to cover, for example, fake news, but this leads to one of the challenging points I want to highlight. When you regulate based on a type of technology, recommendation algorithms, it is very specific: it is not an umbrella term for AI, it is not digital services, it is just recommendation algorithms. The provisions go into fake news because some algorithms push disinformation, and they try to address that by stating that only those who hold a service permit may provide Internet news information. When you analyse this from the perspective of the type of government involved, it becomes a challenging point for protecting fundamental freedoms and human rights.
Another point I want to make is very similar to what Monika mentioned. Across the text of these provisions you see, time and again, that the company behind the algorithm must present information according to mainstream value orientations and prevent controversies or disputes. But what exactly is a mainstream value? What exactly does preventing controversies or disputes mean, when these platforms want users to engage with them, spend more time on them and give them their attention? You want people to keep clicking, watching and reading. Are you actually pushing forward the status quo of the political agenda or current social values, or are you just enabling tech companies to carry on as before, so long as the legal text is covered and a compliance department can tell you: oh, we have these algorithms and these internal procedures to remove everything that may be controversial, even though we don't know exactly what is controversial?
Just to wrap up quickly: my main concern when we talk about regulation is that platforms are global, but governments, cultural values and notions of privacy, autonomy and freedom vary to many different degrees across jurisdictions. For example, when I was in China I did not feel empowered at all by the apps I was using, because a lot of it was so confusing; but my friends were very used to them and would say, oh, you can turn this off here and there. So how can we think of a regulatory framework that protects freedoms and fundamental rights but also considers the nuances that come from different cultural expectations and from governmental autonomy and sovereignty? Thank you for your time.
>> YASMIN CURZI: Brilliant.
>> LUCA BELLI: Fantastic. We have covered a really enormous set of opinions, critiques and suggestions. I noted there was a gentleman with a question, so, since we started some ten minutes late and there is no other session coming, we can take maybe 5 to 10 minutes for a round of comments and questions. Please.
>> Thank you very much. I'm the Chairman of the World Summit Awards, and I have started the data intelligence initiative in Europe, so we deal very much with the data regulation side, from the Data Act onwards. One thing that struck me in the presentation from Monika, but also now from Sofia and other speakers, is this: can one agree on a hierarchy or prioritization of goals in platform regulation? Monika was talking about the environmental impact and the labour exploitation behind these platforms, and there are many other concerns; but if you use competition law you are addressing something completely different, namely market power. And if you address market power, you may get a regulation in place that then allows you to deal better with privacy issues and the like. So I would be interested in whether a process like our consultation here at the IGF could have as a deliverable at least a sketch or scaffolding of regulatory goals, and then see what the appropriate mechanism is for each. What I liked very much about Sofia's last point is the idea that universality of regulation might itself have a colonialist aspect to it. One therefore has to ask: what are the principles, and what are the implementations? So I would be really interested to know whether work is being done on this, or whether people in this room and others associated with this Coalition would be interested in engaging in something like this, because I would be. Thank you.
>> LUCA BELLI: I see we also have a comment from Anita; she has the mic. Let me say that is an excellent suggestion for next year: a compilation not only of what exists but of what should be taken into consideration, because the example of labour rights being completely disregarded is something we have been discussing a lot in terms of content moderation. We insist content moderation has to be human, but then you are condemning people to be traumatised for life, because they have to moderate what a lot of very special characters share on social media. That is very much something that is totally disregarded by regulatory frameworks, and it is key. Please, the floor is yours.
>> ANITA: It's a very important point, but I also feel there is another dimension to thinking about common benchmarks. I think it pertains to the (?) of most international regimes of the economic legal order, which de facto means that, as domestic economies, we are not able to pay attention to these issues. I think the last presenter was saying that demonising any nation doesn't get us very far, because there are cultural nuances to the way people think, and democracy is indeed as hyper‑local as we can take it to be; the question is how we hold a conversation between the granular and the global scale. The reason I say that is that, oftentimes, the capacity of developing countries even to regulate for labour standards is contingent on the way they get caught in extremely adverse trade terms and conditions. So I think conditions of labour should be regulated not through platform regulation but through the ILO, and environmental harms perhaps through the Biodiversity Convention and other protocols. We should simply not do this forum shopping, because what would be established there is the supremacy of a global economy that seems to serve certain nations within international regimes, and that is what should be challenged. If we seek to regulate platforms by saying everyone must have a uniform platform regulation with a common minimum standard, then we will simply arm‑twist certain countries into adopting regulation that they then cannot domestically govern.
>> SHILPA JASWANT: I want to quickly point out, not with respect to environmental regulation, but that in Kenya, in February, something notable happened. Facebook has a big set of people there working as content moderators, and their working conditions were horrible: long hours, and the kind of content they had to go through day and night was horrifying. They put their issues before Facebook, which refused to listen to them, so now they are going before their labour commissioner with a class action suit. I think it is one of its kind, and a lot of different countries are now following up, seeing that something like this can happen. On the competition part, just to give you a bit of an idea, the entire data protection debate actually started within the competition regime. Since then we have created different compartments for dealing with this kind of problem, and we have ended up with a new kind of problem: if it is about data and privacy, you cannot go before competition authorities. Recently, however, the German competition authority took up a privacy claim about how these big platforms abuse people's privacy, holding that they abuse their market power to force people to give up their privacy.
All of these divisions are being removed. So, like I said, it is about the vision and what the state really wants to do. It is not impossible; it is about how we interpret these laws. We shouldn't read laws in black and white; we need to go beyond the words to the purpose and intention behind them.
>> LUCA BELLI: We have one very last comment, and then we will wrap up.
>> So I was working in vendor management at Google; we call them (?). My first point is that making sure the conditions of these workers are humane comes down to the teams and to making sure their management meets certain standards, which is a very micro, corporate‑level detail that governments usually don't have the tools or the understanding to dictate. My second point is that when we talk about how governments regulate technology companies, we should also come up with a framework and terminology to name and address the situations where big tech is pressured (?) from the back line, sometimes by totalitarian governments, sometimes by (?), in ways that do not go through the official mechanisms of big tech companies in terms of content moderation. If we name that, we can say: that is not regulation, that is something else, an unofficial channel that really harms citizens at that point. I think that would be helpful. Thank you so much.
>> YASMIN CURZI: I just want to make a really small remark on Anita's comments. In our framework for this year, despite taking an international law approach, we tried not to draft from a universalist perspective; we also drafted it from a relativist approach. Human rights need to be criticised too: we have to keep a critical perspective on human rights because, of course, the discourse can be ideologically dominated by the rich countries. We are from Global South countries, and you know that this hegemony of ideas can really be weaponised against our sovereignty. So yeah, just a small remark.
>> LUCA BELLI: Okay. I think we have covered an extensive range of issues, probably almost every kind of issue we could cover in 90 minutes. I would like to thank all the panellists very much for their excellent presentations, and the participants for their excellent comments, because we have also had some very interesting remarks and ideas for further work. If you are interested in reading the document, go to the Coalition's web page, share it and give feedback if you want. And we will meet you next year to celebrate the tenth year of the Coalition. Bye‑bye.
(Applause)