IGF 2023 – Day 4 – Open Forum #139 Non-regulatory approaches to the digital public debate – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> JONATHAN BOCK RUIZ: I think it is now time to start. This is the moment we'll begin this panel session right here. Welcome, everyone attending on this final day of the IGF 2023. This is Open Forum #139, non‑regulatory approaches to the digital public debate. Are we going to speak Spanish?
OK, cool. So welcome to this session. This is the final day of this year's IGF. It is a pleasure to be with you all. First of all, I want to thank the organizers of this event, representing the Office of the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights; thanks also to the representatives of Sweden and the European Court of Human Rights that have supported the proposal for this session, and also to the Foundation for Press Freedom in Colombia and the Centre for Studies on Freedom of Expression and Access to Information in Argentina. I work for a Civil Society organisation working at the intersection of Human Rights and digital technologies in Latin America. I am coming from the city of Santiago in Chile, and my colleagues are scattered throughout the Latin American region. Our concern as an organisation is how digital technologies can be used for the exercise of Human Rights, as well as how they can be a threat to Human Rights when they are regulated or misused by actors both private and public. Finally, I am going to briefly introduce the panelists by name; they will introduce themselves when it is time for their own interventions. We are accompanied at this hour online by Mr. Pedro Vaca, the Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights of the Organization of American States. Here on‑site we have ‑‑ a senior programme specialist at UNESCO, the United Nations Educational, Scientific and Cultural Organization; a representative of the international Human Rights organisation working to protect and promote the right to freedom of expression; and the deputy director of the Centre for Studies on Freedom of Expression and Access to Information in Argentina. Thank you all once again for attending, and thank you to the panelists who will be speaking in turn in a few minutes. The rules of this panel are as follows.
We will begin with a brief overview of the situation which has motivated this discussion: what the digital public debate landscape and the challenges to Human Rights are with regard to online expression. After that, each speaker will have 10 minutes for their interventions. Then, if time allows, we will have a second round of reactions and participation, hopefully with audience interventions, mediated by the moderators here on‑site and also online. The guiding question that will open this discussion is on the possibilities of non‑regulatory approaches: where they can succeed and the challenges they present. But to introduce the subject, a few words from the moderator here. We understand that in the intricate terrain of the digital public debate we have faced for a long time a series of challenges to Human Rights that have been compounded, reinforced, and in some cases worsened by events around the world. The failure of both private tech companies and states to fully comply with their Human Rights obligations has had profound consequences, affecting democratic institutions, Human Rights, and the rule of law. Against the background of global and local crises in terms of war, disease, authoritarian rule, and Human Rights abuses that happen both offline and online, we are faced with challenges to Human Rights that oftentimes are addressed, or attempted to be addressed, through regulatory responses. But because of the presence and importance of private actors, that always entails an interaction with companies that often have more power or more resources than many states. Over time, we have witnessed the far‑reaching impact of online violence, discrimination, and disinformation in the digital public debate, issues that have cast shadows over the virtual landscape, bringing harm especially to marginalised and vulnerable communities and groups.
What was once a platform promising diverse voices and perspectives has seen the development of hostile environments, particularly for traditionally discriminated groups. Discourse has become polarised around essential matters, eroding trust in authoritative sources such as academia, the media, and public health authorities. So regulatory proposals have come to the forefront at a global scale. We have seen efforts by international organisations to provide guidelines and guidance for regulatory responses. We have seen that regional blocs have also reacted with their own concerns. Many of these intricate systems aim to tackle diverse but interconnected issues, including competition, data protection, interoperability, transparency, and due diligence in the digital public sphere. And while these efforts are critical for responsible behaviour online and for protecting Human Rights, they also introduce complex questions and concerns that demand careful consideration: about the balance of rights, the roles of states, jurisdictional issues, and the enforceability of the provisions that are created. One of the pivotal questions that emerges is related to the fragmentation of the Internet. While regulation is essential for safeguarding Human Rights, it is vital that these regulations do not inadvertently infringe on freedom of expression, privacy, and other Human Rights, so striking a delicate balance in the digital world is a formidable challenge. Notably, in many regions, especially in the majority world, regulatory debates are in their infancy or completely absent, and in these contexts, principles of international Human Rights law have played a crucial role in guiding the behaviour of companies that mediate online communications.
These principles have provided valuable guidance in the absence of regulatory frameworks, but their effectiveness is a matter of discussion and debate. So in response to this debate, we are going to speak this morning about what these challenges are. Since we have seen the advance of global regulation of platforms and the Internet in general as a path to address threats to Human Rights, what are the limitations of these proposals?
If they have limited effects and, in some cases, can present tensions with the balance of Human Rights, what other policies and what other institutional and legal frameworks have been implemented, or can be implemented, globally or regionally to promote freedom of expression online and foster a diverse, equal, fair, and nondiscriminatory online public debate?
The first word goes to Mr. Pedro Vaca of the Inter-American Commission on Human Rights. Pedro, please go ahead. 

>> PEDRO VACA: Thank you. Good morning there. I hope you're having a great IGF this year. Thank you very much. First of all, I would like to highlight that in the Americas, we have identified that the current dynamics of freedom of expression and the Internet are characterized by at least three aspects. The first one is the deterioration of the public debate. The second is the need to make processes, criteria, and mechanisms for Internet governance compatible with democratic Human Rights standards. And third, the lack of access to interconnectivity and digital literacy to enhance civic skills online. Disinformation affects opportunities of participation in the public debate and (?). We understand that diverse and reliable information and free, independent, and diverse media are essential; addressing disinformation and Human Rights violations requires multinational and multi‑stakeholder responses that are grounded in the full range of Human Rights. As people worldwide increasingly rely on the Internet to connect, learn, and consume news, it is imperative to develop connectivity, and access to the Internet is an indispensable road to Human Rights, including access to information. An open, free, global, interoperable, reliable, and secure Internet for all, along with the full enjoyment of rights such as freedom of expression, opinion, and peaceful assembly, is only possible if we have more people accessing and sharing information online. Additionally, in the informational scenario of media and digital communication, citizens and consumers should be given tools to help them assess the origin and veracity of news stories they read online, since the potential to access and spread information in this environment is relatively easy, and malicious actors manipulate it to mislead the public.
In this sense, critical digital literacy aims to empower users to consume content critically, a prerequisite for online engagement, by identifying issues of bias, prejudice, and misrepresentation. Critical literacy, however, should also be about understanding the position of media technology in society. This goes beyond understanding media content to include knowledge of the wider socio-economic structures within which digital technologies are embedded. Here, we have a few questions. How are social media platforms funded? For instance, what is the role of advertising?
To what extent is content free or regulated?
Given their importance for the exercise of rights in the digital age, digital media and information literacy programmes should be considered an integral part of these efforts and built into commitments by states, and by business entities, to respect and protect Human Rights. Likewise, initiatives to promote journalism are key in facing informational manipulation and distortion, which requires states and private actors to promote the diversity of digital and non-digital media. On the other hand, the role of public officials in the public debate must be highlighted. It is recalled that state actors must preserve the balance and conditions for the exercise of the rights of access to information and freedom of expression. Therefore, such actors should not use public resources to finance content on sites, applications, or platforms that spread deceitful and harmful content, and should not promote or encourage stigmatisation, which in turn promotes the protection of users against online violence. The state has a positive role in creating an enabling environment for freedom of expression and equality, while recognising that this brings potential for abuse. In this sense, in the Americas we have a recent example in Colombia, where a court urged political parties to adopt guidelines in their codes of ethics to sanction acts of, or incitement to, online violence. In this decision, the Court recalled the obligation of the State to educate about the seriousness of online violence and gendered online violence and to implement measures to prevent, investigate, and punish it. The Court also insisted that political actors, parties, and movements, due to their importance in the democratic regime, are obliged to promote, respect, and defend Human Rights as a duty that must be reflected in their actions and in their ethical duties.
Additionally, the Court ruled that the State should adopt the necessary measures to start a training plan for members and affiliates of political parties and movements on gender perspective and online violence against women. Considering that narratives can be amplified by State actors, paid content should follow specific criteria in the ad market. Any contracting for content by State actors or candidates must be reported, through active transparency on government or political party portals, regarding the value of the contract, the contracted company, the form of contracting, the content distribution mechanisms, the audience segmentation criteria, and the number of exhibitions. On the other hand, to make business activity compatible with Human Rights, the office of the Special Rapporteur considers that Internet intermediaries are responsible for respecting the Human Rights of users. In this sense, they should: first, refrain from infringing Human Rights and address negative consequences on such rights in which they have some participation, which implies taking appropriate measures to prevent, mitigate and, where appropriate, remedy them. Second, try to prevent or mitigate negative consequences on Human Rights directly related to operations, products, or services provided through their business relationships, even when they have not contributed to generating them. Third, adopt a public commitment at the highest level regarding respect for the Human Rights of users, and ensure that it is duly reflected in operational policies and procedures. And fourth, carry out due diligence activities that identify and explain the actual and potential impacts of their activities on Human Rights, also called impact assessments, in particular by regularly carrying out analyses of the risks and effects of their operations. In conclusion, to wrap up, the challenges facing the digital public debate require a multidimensional approach.
As stated before, education and legal mechanisms can together create a framework to mitigate the harms we face online: a digital space where freedom of expression and the protection of Human Rights are promoted, fostering a society that values inclusivity, diversity, and respect for all. Thank you. 

>> MODERATOR: Thank you very much, Mr. Pedro Vaca. Thank you for those remarks, and thank you also for starting this conversation by addressing the need for a multidimensional approach. This is not necessarily a discussion of regulatory versus non‑regulatory measures but, apparently, of different types of measures at the same time. We will now listen to the rest of our panelists, beginning, of course, with our second on-site participant here, Ms. Anna Buellas, of the freedom of expression and safety of journalists section of UNESCO. You have 10 minutes. 

>> Thank you. Thank you very much. It is an honour to share this panel with you, Pedro. Good to see you. So as Pedro said, we have a holistic approach to try to deal with this phenomenon. At UNESCO ‑‑ this is not my area of expertise, but there's a lot of work done with teachers, with educators, to target potential harm from content online. There's specific work being done to develop resilience in different communities, primarily in Indonesia, Colombia, and Kenya, through the Social Media 4 Peace project, which is funded by the European Union, to map how these issues are happening in those different countries and which contexts and related matters allow this harmful content to spread. And there's another action that has been happening that relates to capacity-building of different stakeholders, such as judges, parliamentarians, and regulators, in order to understand that when tackling potential harm from content, there's a need to safeguard freedom of expression, access to information, and diverse cultural content. There's work being done also through the cultural sector in order to understand the impacts of harmful content on artistic expressions. And the last thing, which I think is also important, is that we have another action related to policy: guiding member states in the process of ensuring that the governance of digital platforms, as Pedro mentioned, safeguards freedom of expression, access to information, and diverse content while addressing the phenomena of disinformation, hate speech, and propaganda. So in this session, I will focus on two main specific projects that UNESCO has been putting forward. I will start with the Social Media 4 Peace project.
One of these projects, as I said, started in four different countries and allows us to understand what is happening with content moderation and how we see it affecting different communities, and also how an approach can be successful when it is holistic, with different types of solutions. So the first thing we learned, through the Social Media 4 Peace project, is that context matters. When it comes to content moderation, it cannot be one-size-fits-all. There are specific languages in different regions that are important to understand in order to address content moderation issues, and this is not happening in many countries, or many of the countries that we're working on, which are specifically also countries that are in crisis or that come from crisis. The difficult thing that we found is that despite acknowledging these crises, and despite the lack of knowledge of the contextual nuances that their platforms can amplify in the online world, there's a problem of companies not considering these countries a priority and then not providing enough funding for the development of content moderation measures. So companies give priority to those countries that have global impact or that represent a market share that is important, and in those countries where this is not happening, they are not putting sufficient resources into them, and then this is increasing and creating more problems.
The Social Media 4 Peace project also understood that when dealing with these problems, the most important thing to bear in mind is to have the capacity for dialogue between the different stakeholders, acknowledging that in conflict zones there are many issues happening offline that have to be considered in the online world. That's why due diligence from the platforms is very important: understanding the context, developing risk assessments, and identifying the specific mitigation measures that they have to put in place in order to reduce the specific risks based on the context is very, very important. While doing this work, I want to say, there were two main approaches. The first one is faith in the companies to turn their economic interest in how content moderation is done toward the public interest: making people aware and reducing this content, which many times is also monetised through advertising, as has already been mentioned. So that's the first question. Are we keeping the faith that companies will shift from economic interests to the public interest?
Many people still believe that this can be one of the approaches: to push companies to increase their resources in order to do better content moderation, and then have a safer space. Then there's another approach, which comes from states, as has already been commented, which tries to reduce this phenomenon without safeguards for ‑‑ and does not touch the companies, making solely responsible for harmful content specifically the user. That is another approach. And then UNESCO, after the work that has been done through the Social Media 4 Peace efforts, said: OK, we are acknowledging that these are the two different approaches. What we need also is to start a debate that allows us to understand if it's possible to balance freedom of expression, access to information, and access to diverse cultural content while dealing with potential harm from content such as disinformation, hate speech, and conspiracy theories. And while doing this, UNESCO started a consultation that led to more than 10,000 comments that came from the engagement of people from around 134 countries. And what we learned is that when governance systems are transparent, have checks and balances put in place, allow content curation and moderation to adhere to UN Human Rights principles, are accessible and inclusive of diverse expertise, and take cultural content into account, then it can be a game-changer. So that's why UNESCO started developing its Guidelines for the Governance of Digital Platforms, which on the one hand enable a freedom of expression environment and, as Pedro has mentioned, set specific requirements for governments to commit not only to freedom of expression online but also to all of their duties in respecting and promoting freedom of expression offline.
And the second thing is that UNESCO acknowledged that creating such a system requires any regulatory measure to be coherent and comprehensive with the different kinds of regulatory arrangements and to follow a multi-stakeholder approach. This means there's not only statutory regulation that depends on states and companies; there should be participation of other stakeholders in the whole regulatory process, meaning the development of the regulation, its implementation, and its evaluation. The third thing that the guidelines state is that companies have to comply with five key principles. One is due diligence, which specifically states that companies have to develop Human Rights risk assessments when they are developing new operations, creating new partnerships, or developing new products, and they have to do it prior to an electoral cycle. This is very important considering that 2024 is a super electoral year, and at least three quarters of the population that is able to vote will be called to vote in 2024. Companies should also develop Human Rights assessments when it comes to crises, emergencies, and armed conflicts, and they have to understand the different risks that the content on their platforms poses to specific communities such as journalists, environmental defenders, artists, or other marginalised communities. The second principle is transparency; I don't have to go very deep into it. The third is accountability. The fourth is user empowerment, which means that within the governance system there should be specific programmes developed for media and information literacy. And the fifth is the alignment of all their actions with these principles. So this is the work that so far has been done.
We definitely believe, as Pedro said, and we state, that this is a holistic approach and that no action should stand alone, because if actions don't come together with many others related to, yes, education, yes, creation of communities, yes, (?), then these different phenomena will not be tackled. Thank you. 

>> MODERATOR: Thank you very much for that extremely informative intervention on all of the initiatives that UNESCO is carrying out, including trying to provide guidance on regulation for governments in a manner that has included many rounds of consultations and a broad discussion, as you mentioned, with thousands of comments from the world over, which of course also enriches the learning and insight of the organisation itself on how to address many of these issues from the perspective of freedom of expression, access to information, and access to diverse cultural content, which I think is a key factor in all of this and sometimes not addressed specifically, so thank you very much for that. Now, can you please tell us your own views on these subjects?

 

>> Can you hear me?
Thank you very much. I will try not to repeat too many points that have been made by the first two interveners, which were obviously excellent and extremely relevant: we need to look at the whole toolbox, the regulatory and non‑regulatory approaches. Perhaps just very briefly, I think this discussion is very important because we do agree that with many of the proposals that we've seen, or legislation that has been adopted recently seeking to regulate platforms, there is indeed a danger that these will do more harm than good. They talk a lot about holding platforms accountable, but at the same time, very often what they do is not focused on the business model of the platforms, on the data tracking, on the model, but rather pushes the platforms to exercise more control over user speech. So the focus goes from the platforms and their systems to the speech of users, and it is critical that any regulatory framework that has such a strong impact on freedom of expression is seriously grounded, evidence-based and, of course, grounded in the principles of legality, legitimacy, necessity, and proportionality, as Article 19 requires. This is also why, working more or less globally, what sort of solutions we think would be appropriate depends on the jurisdiction. Although in principle we think sound regulatory frameworks should be in place, with many governments we would not advocate for passing legislation that would control platforms, because we do feel that it would not be a regulatory proposal respectful of freedom of expression but would instead give the government more options to control online speech. Also, ARTICLE 19 has long advocated that it's important to take the competition angle as well, because there are very few dominant players in this field.
They are gatekeepers of these markets, and they are also really gatekeepers of our freedom of expression online, and we do strongly believe that decentralisation can, per se, have a positive effect on freedom of expression: more healthy competition, more empowerment for users. For example, users who think, I do not want to be on a certain platform because I do not think it respects privacy enough, and this is important for me, should be able to leave that platform and still be connected to the contacts and family that wish to remain on it. As has been mentioned, the UN Guiding Principles can be a very important tool; they are an essential tool that we advocate platforms take into consideration all over the world. So whether we have good regulation in place, bad regulation in place, or no regulation at all, that should always be the benchmark against which they operate, and a lot has been said about them, so I won't go into details. Also, since we're talking about the risks of the different approaches: we think that if enabling responses are at the centre of this discussion, then the risks to freedom of expression are limited. This is linked to another observation we made. Discussions sometimes seem to suggest that social media platforms are the cause of the problems, and we do not deny that they have exacerbated certain societal tensions and increased polarisation. There's no question about it; there is enough evidence this is happening. At the same time, we think it's essential to look at the root causes, for example, of hate speech and online gender-based violence, and this may include certain regulation of the platforms' business model, but it also needs to look at very different areas outside the specific digital space.
So for example, ARTICLE 19 published a couple of years ago a toolkit on hate speech showing what those different approaches look like, where we again need to look at regulatory and non‑regulatory responses, such as anti-discrimination legislation. Public officials, as Pedro mentioned, should not themselves engage in stigmatising discourse and should counter such discourse when they encounter it; public officials need to receive equality training; and we need an independent and diverse media environment. All these aspects are obviously key so that we have, offline so to speak, an environment that is inclusive and will not translate into more extreme speech online. And, of course, civic space: a strong civic space and strong civil society initiatives are also a key component of that. Also, to follow up on what Christina said, ARTICLE 19 is a partner of UNESCO in the Social Media 4 Peace project, and there have been a number of research reports, as alluded to, that have really found the failings of the platforms in sufficiently taking into account contextual elements. It starts with Human Rights teams that are not in place for many countries, so Civil Society in many countries doesn't have anyone to call at Meta, for example, if there's a video that needs to be taken down, or if they see there is an election coming or a crisis online; there's not anyone they might be able to talk to or who would be responsive. Obviously very important. An additional problematic element is the use of automated content moderation tools. While we recognise that content moderation at scale cannot happen only through human review, it's also true that many of these tools are not sophisticated enough, and might not ever be, to really make a proper assessment of some very complex categories of speech.
Even for a court, it can be very complex to make a judgment on, you know, was there really hate speech?
Was there the intention to incite hatred?
Was there disinformation?
Was there an intent to publish false information and disseminate it?
Was there an intent to cause harm?
Obviously, doing this moderation at scale can present very serious challenges, and we always call for more human reviewers who are native in the languages they moderate. More local Civil Society organisations need to have direct, meaningful access to the platforms, because we also know that there have been these trusted partner programmes which have not always been very satisfactory, to say it mildly, and Civil Society has often found them a waste of time and resources, with limited impact. Perhaps, because I know we are far advanced in time, I want to make a final reflection. I think an interesting trend we are seeing now, which is a non‑regulatory trend, though also based on regulation, is a trend of litigation against online platforms. Prominent examples have been the US Supreme Court cases where families of victims of terrorist attacks in Turkey and in France filed suits against Twitter and Google, for example, saying that their systems failed in a way that enabled terrorist content to spread online and in effect aided these terrorist organisations. We have also had other litigation happening in Kenya over the violent content that was spread in Ethiopia and moderated from Kenya, and also over the failings in Myanmar, where strategic litigation has been brought. That in itself, from our perspective, has some challenges, because from a freedom of expression perspective, organisations have always said that it is essential that platforms remain largely immune from liability for the content that they host. But at the same time, of course, there needs to be platform accountability, and there need to be remedies if they infringe on the Human Rights of actors or affected communities in the respective countries. So here is where it will depend on how this litigation is brought.
We do not want to see courts saying: after all, you need to be held liable for hosting terrorist content because it has led to a terrorist attack. At the same time, it can be very interesting if we start seeing more litigation that focuses on remedies for failures to conduct these Human Rights impact assessments, to take Human Rights due diligence measures, and to implement mitigation measures properly. So I do think this is a trend that gets a lot of publicity, and there are a lot of reputational risks linked to the platforms, which could also be pressure for them to essentially get their act together. Thank you. 

>> MODERATOR:  Thank you very much for offering so many different pathways towards what we expect to see from the platforms, but which are so difficult to achieve.  That speaks to the role they have in solving social media problems, even though, according to some views, they might not be creating them.  So now, your turn.  What frameworks have been implemented, or can be implemented, beyond just the regulatory ones, to address the problems that we have with online speech?

>> Thank you very much.  Should I introduce myself?  (?)
I don't want to be too repetitive of things that have already been said.  So let me just offer a diagnosis of where we are, and also highlight a few tensions that I think underlie our discussion and have not yet been resolved.  The old has not yet died, and the new has not yet been born.  So we are at that moment in between the old and the new, which is always an interesting time to be in, and also a challenging one.  I think we are clearly moving towards a regulatory moment, so in a way, the question that has been posed in this panel is more or less in tension with the trend of where the world is going.  I agree with everything you just said, and I agree that regulatory and non‑regulatory measures are important and should take place at the same time.  But I think we are moving towards a regulatory moment.  Europe is obviously what will most likely be the model that expands across the globe.  We have already seen copycat bills, not legislation yet, presented in Congresses in Latin America that have not been adopted.  But legislators in other countries look at the language and copy some of its provisions, and that is a process in and of itself full of challenges.  We have also seen calls to revise Section 230 in the United States; because of Congress and its gridlock, it is difficult to imagine that a comprehensive review of Section 230 will happen anytime soon, but we have seen state‑level legislation being passed that imposes obligations on platforms.  We have already seen strategic litigation against companies, but not in the direction that you mentioned.  In the opposite direction.  For instance, the cases that basically say that the kind of relationship the government has established with companies in the US violates the First Amendment.  In a way, litigation cuts both ways.  
So it could be litigation that questions companies for failing to live up to Human Rights standards, but it could also be litigation against companies for violating the First Amendment in the United States.  So I think that's where we're going, and it will be interesting to see how we get there.  In terms of alternatives, of course, the commission has supported alternatives for a long time.  I was part of the 2019 process of discussing the guidelines on disinformation in the electoral context, and the outcome was to support non‑regulatory measures.  I'm not going to repeat what you just said, but literacy, of course, is very important.  I would like to highlight, though, that literacy initiatives are in a way a bet on an old principle that was very cherished in the Human Rights and freedom of expression field: that it is our responsibility as democratic citizens to figure out what's fake and what's not.  The Internet makes it more difficult to exercise that responsibility, but I would underscore that those kinds of initiatives are based on that principle, and we have not yet renounced it.  And, of course, all kinds of counter‑speech measures are obviously very easy.  They are not threatening from a Human Rights point of view, they are fairly easy to implement, and apparently quite successful; where counter speech has been most successfully deployed is to combat disinformation in elections in Latin America.  But again, calls for regulation have been happening, including for the kind of regulation that on paper looks very good and respectful of Human Rights standards.  The same goes for guidelines.  The risk involved in these initiatives is something Chantal already mentioned: even legislation that is good on paper could do more harm than good, and I think this has to do with, in many countries, a lack of the institutional infrastructure necessary to adopt these kinds of regulations.  
That obviously is a concern for activists, but as I said before, I think we're moving in that direction, and we'll have to deal with it as the time comes.  I'm pretty sure that in the next couple of years we will see legislation being passed outside of the European Union, and we will have challenges in that sense.  Now, I would like to highlight a couple of underlying tensions to close my remarks.  For instance, we have been discussing the importance of breaking up companies, and antitrust legislation, which for practical reasons will happen where corporations are incorporated, or in places where they have an important market presence and where there is the kind of institutional infrastructure necessary to move forward with this process.  There is ongoing litigation in the United States against Google.  There are, at the same time, investigations in the European Union.  It is hard to imagine that Latin American countries could move in that direction, but I think that's important.  Now, it seems to me this is in tension with, I would say, the framing of the DSA, or the framing of the regulations that are being proposed, because to an extent those kinds of regulations depend on a few powerful intermediaries.  So if platforms were, let's say, broken up, and we had an Internet that is extremely decentralized, as it was towards the end of the 1990s and the beginning of the 2000s, I don't know how that would be compatible with increasing control, even in a way that is respectful of Human Rights.  If we have a truly decentralized web in which people get to choose, a lot of people will choose hateful content.  A lot of people will choose and engage with discriminatory content, and if it is truly decentralized, there will be nothing in between.  So I think that's an underlying tension that to an extent speaks to a really deep and profound disagreement in the field of Human Rights about what kind of future we imagine as desirable.
I mean, this is something that I think is there, underlying, and I don't think we discuss it as openly as we should.  You know, are we willing to support freedom of expression in the form that we affirmed it in the 20th century, where we basically rely on gatekeepers to keep it in check?
Or are we embracing the decentralized promise of the Internet of the late 1990s, even if that means a lot of speech that is problematic, that is harmful?  I think there's still a lot to figure out in terms of evidence.  For a lot of speech called harmful, we just don't have enough evidence to support that it is actually that harmful.  But that underlying tension is there, and we should keep it in mind and discuss it more openly.  Thank you. 

>> MODERATOR:  Thank you for your sobering remarks, and also for highlighting the trend towards regulation that we see, even though we can discuss other forms of addressing some of these challenges.  So I want to first check whether we have hands in the room that would like to pose any questions.  Otherwise, we will be starting to close this panel, since time is running out.  I see no hands, so I will pose a question myself to the panel, beginning with Pedro.  I don't know if you are there, but it will be a rapid round: one challenge and one opportunity.  If there is a future in which regulations will come, one challenge and one opportunity that we may find in non‑regulatory approaches that can be taken today, as soon as possible, among non‑governmental actors, in order to provide for the Internet that we all want and for the platform responsibility on Human Rights that we would expect.  We will go in the same order in which this panel began, with up to two minutes each.  Please, Pedro, you go first. 

>> PEDRO VACA:  Thank you, Juan Carlos, and thank you for holding this amazing conversation, and so many conversations.  The challenge, I think the challenge that we have faced, is the lack of capacity in a lot of member states.
I mean, we cover the Americas.  We monitor 30 different countries, and at this moment, October 2023, we do not have enough capacity, or even knowledge, among member states to be part of the conversation.  So I think we have to develop contact points at the foreign affairs ministries (audio cutting out)
we only have powerful countries with the capacity, and then we have opportunities to deal with each other.  And the opportunity, I think, is why I highlighted the Constitutional Court of Colombia.  I think we can put all our efforts on the user and the consequences for the user, or we can also prioritise the role of public servants and political leaders.
I mean, if you have racists in a city, you have a problem.  If you have leaders that promote discrimination, you have a bigger problem.  As points of reference for society, democracies should and could frame in a better way what is and what is not allowed at that level of representation.
I mean, the scope of freedom of expression of people who want to become part of, or want to participate in, the political sphere is limited compared with ordinary citizens, and on that specific opportunity we have a lot of Inter‑American and international standards, so it is something that is not even (?). 

>> MODERATOR:  Thank you, Pedro.  I'll ask the rest of the panelists as well, first Christina: one challenge, one opportunity.  (Off microphone)

>> One challenge is that the discussions focus a lot on how legislation will look and not on how the second step of the process will go.  I've been saying this in the different panels I have participated in: many regulators seem to think that once legislation has passed, no one will care about it anymore and they will be left alone.  And as mentioned, there are many regulatory authorities that do not know how to deal with this issue and that are not used to talking with Civil Society, so we need to break that tension and be able to create a conversation among them.  That would be one opportunity.  And another opportunity is that, since the companies are not based in their countries, what we see is that stakeholders in different countries and regions, for example in Africa with the African Union, are coming together, saying: OK, the companies don't care about any one of our countries per se, they don't have a specific interest in country X, but what they do care about is all of us together.  They are getting together with Civil Society, with electoral management bodies, with the African Union; they are coming together with the different stakeholders to go before the companies and say, this is what we need and this is what we want.  That creates a great opportunity, because among those 40 countries you have countries that actually believe that a Human Rights‑based approach is the way to go and other countries that do not.  There is a balancing process, but for me it's a great opportunity. 

>> MODERATOR:  Thank you very much.  Chantal? 

>> Thank you very much.  In terms of challenges, society tends to move slowly.  Regulators tend to move slowly.  Technology doesn't.  So we are in this situation where everyone is trying to catch up.  There are a lot of initiatives in the European Union alone, for example: the AI Act, the Digital Markets Act, the rules on political advertising.  And it is a challenge for Civil Society active in this field to really be able to catch up with everything and cover everything.  Not to mention, there are a lot of Civil Society actors that are very much impacted by what's happening in the digital space but are not necessarily experts in it, or in content moderation; they are experts in, for example, women's rights, and these are quite technical subjects, so it requires a lot of expertise.  So I think this is one of the main challenges: the expertise and the capacity that it requires.  In terms of opportunities, we do feel that there is more recognition from some of the platforms and some of the regulators that Civil Society are experts in many of the issues we're dealing with as well.  There are more consultation processes to take the opinions of Civil Society into account.  So we do feel there is more appetite from platforms and regulators to engage with us, but at the same time we don't want this to happen in a way where they outsource their own responsibility and say, we don't need to deal with the Human Rights aspects because Civil Society will.

>> MODERATOR:  Thank you very much.  You have the last word. 

>> Very quickly, I would say the following.  I think one of the biggest challenges is that, to move forward on regulation or on non‑regulatory measures, we generally have to do it in a context of deep polarisation, and that is always very difficult.  But at the same time, I think that context offers an opportunity, because in most democracies around the world there is a need to rebuild the public sphere and civic discourse.  There is a need to start talking to each other in a way that is respectful.  And even though that is difficult, precisely because of polarisation, that underlying need is still an opportunity.

>> MODERATOR:  Thank you very much.  And with that, our time is up.  Thank you very much to my fantastic panelists and everyone who has attended this session, and have a nice rest of your IGF.  Take care, everyone.
[applause]