The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record.
***
>> MODERATOR: Good morning, everybody. This is our session workshop number 401 on Inclusion Online, Diverse Knowledge, New Rules? Thanks for joining us. We have had many registrations. I'm glad some of you also showed up. Thanks for being here, for taking the time. My name is Jan Gerlach, I’m at the Wikimedia Foundation, the non‑profit that hosts and supports Wikipedia and several other projects of free knowledge that people around the world can freely participate in and share.
I am very happy to be joined today by wonderful colleagues over here, whom I will introduce in a minute. As a brief agenda for today: I will briefly frame the topic and introduce our speakers. We will then have a discussion of roughly 50 minutes, followed by an open Q and A for which we will take questions from the room, but hopefully also from remote participants. If you are participating remotely, please feel free to type your questions into the Zoom app. We will park them for the Q and A later, but you can go ahead and type in any questions that you have for the speakers. And finally, we will draw some conclusions and wrap up the session shortly before 1:00 p.m., so we have roughly 90 minutes for the session, which is great. It should allow for an in-depth conversation here.
To briefly frame what we were envisioning: the concept or proposal for this session is to really explore potential governance responses to the diverse interests of new groups, and the new roles and responsibilities of different sectors toward the people who are coming online. We can call them newcomers. We can call them people who are the real digital natives.
This is about basically finding ways to allow people to meaningfully participate even if they haven't been part of rulemaking and policy making until now. So topics that we can explore here are, maybe, the need for flexibility in such norms so that newcomers can contribute to and shape them in ways that accommodate their diverse needs; the tension, if there is such a thing, between freedom of expression and inclusion; self-governance, citizenship and participation.
I'd like to touch upon diversity and youth as a special topic, gender as well, and possibly, if we find the time, the role of languages and also scripts, and how those influence governance and policy making for people around the world who may come from different backgrounds. Unfortunately one of our panelists from Wikimedia can't be here, so I will briefly share our perspective, even though as a moderator I don't like to intervene too much, so bear with me. From a Wikimedia perspective, we believe everybody around the world should be able to freely participate in knowledge and to contribute not only knowledge but also to the rules that govern knowledge and how it can be shared, read and expressed online. So we believe that just connecting the unconnected, as we often hear at conferences like the IGF, isn't enough. Our goal is to make sure that people have meaningful access, can participate in democratic, distributed ways, and can feel safe as they share their knowledge, because all of us benefit from them.
So with this background, I'm really happy that the following four speakers sitting to my left are joining us today. I will just go through my, I think, alphabetical list here. First, I have Debora Albu from ITS Rio in Brazil, program coordinator for democracy and technology at the Institute for Technology and Society. She holds a Master of Science in Gender and Development.
Next is Santiago Amador who is an innovation advisor to the mayor of Bogota. He holds a Master's Degree in Public Administration from Harvard and a Master's Degree in Social Science of the Internet from the University of Oxford. He was the National Director of Internet policy at the Ministry of ICT of Colombia and currently is coordinator of the innovation lab for public services at the Bogota mayor's office and Professor of public innovation in the National School for Public Servants.
Joining also is Chenai Chair. She has extensively focused on understanding demand-side issues with regard to digital policy from a gender and youth perspective. She is currently a member of the IGF MAG, in her second term.
Finally, we have Amos Toh, who is the senior researcher on AI and human rights at Human Rights Watch. He was previously legal advisor to the UN Special Rapporteur for Freedom of Expression. Thank you for joining us. For this conversation, I hope to take a relatively light approach to moderation and Amos called it an anarchic approach, but I hope for a lively conversation. Feel free to ask each other questions here.
And if you in the room have questions, please make a mental note and ask during the Q and A, unless it's super pressing. So I'm interested in exploring the questions that I outlined earlier, and maybe we need to first take stock of the rules that really govern participation online. Just for ten seconds here: we have, of course, international treaties, we have laws, we have regulations, but then we also have terms of service and terms of use of platforms, and of course also of Internet service providers and app stores; and we also have community standards within platforms, and within those platforms, different fora may have their own moderation rules.
So there is a whole network, getting ever more granular, of rules and norms that govern how people can participate online. And it's important to keep in mind that we are not just talking about laws here, not at all actually. So my first question, maybe to you, Chenai, is: who actually is coming online? What are we seeing? Which are the regions and groups that are coming online today and are newly connected, or maybe have connections but are just finding ways to be part of the online world?
>> CHENAI CHAIR: Thank you so much for starting off with me. It's always fun to set the scene for everybody else. When you think about who is coming online, the International Telecommunication Union released its report showing the state of access, and what you find is that almost everyone has some form of access to the Internet, whereas least developed countries and developing countries are the ones that are still coming up when it comes to levels of connection. Africa has the lowest percentage according to the ITU, standing at 28.2% of Internet users, compared to Europe with 85.2%.
But when you take it down to a lower level, you find that South Africa is probably one of the leading countries when it comes to Internet access, with over half the population making use of the Internet. And then, really answering your question of who is actually coming online, you find that it's people with some form of economic income, in the age range of 17 to 35, because they have access to work opportunities and are able to buy a device and to afford the data necessary to come online.
And then what we do understand, and what is agreed upon, is that we have a gender digital divide. So we find that there are more men coming online in comparison to women, and when we break it down further, because these are not homogenous groups, you find that urban men, men who are in urban spaces and have access to higher incomes, are the ones leading the charge.
Then when you compare rural and urban, you find that it's more likely to be urban women, in comparison to rural women, who are coming online. I haven't done as much work on age, but you find that it's that range I was talking about, 17 to 34 or 35, who are coming online, and as you move up into the older ranges you have fewer and fewer people coming online. Those are the demographics we have to take into account. Age matters also because least developed countries have the youngest populations, so that's the context: we have seen younger people coming online, but it's mainly men who are leading the charge. Thank you.
>> MODERATOR: Thanks, Chenai. And since we touched upon youth at the very end of Chenai's framing here, Debora, can you maybe talk to us about what you see in your research around youth coming online, obviously a subset of the newcomers, if I may use that clumsy framing. What do you see? What are their needs and how can they participate?
>> DEBORA ALBU: Thank you, Jan, for the presentation. Thank you also for organising the session. I think it's interesting the way that Chenai was putting it, moving along all of these different demographic categories. There is maybe an important concept to highlight here, which is that of intersectionality.
The more layers of oppression there are that might exclude, and exclude, and exclude these populations from coming online, the more we can understand that it is not the same thing to be a young woman in Brazil as it is to be a young woman in a European country, to put it in a more general way.
So this is maybe a first idea or concept to take into consideration when we are talking about young people. According to recent research from ITU, from UNICEF, et cetera, young people are coming online, but there are barriers to how they are coming online. Maybe the first one is the very possibility for them to have devices to access the Internet.
How can young people access the Internet and buy devices if those are not affordable? So affordability might be the first of these barriers. And even once they have devices, how can they then move forward to actually purchasing data packages and accessing good Internet quality? That's maybe a second layer that makes it even harder for young people to access the Internet.
And maybe a third, just to put it out there as we move along the conversation, is the idea of parental mediation. Many young people don't have their own devices but use their parents' devices to access the Internet. So how is their access to the Internet, to information and knowledge online, mediated by their parents?
So maybe just to start, these are a few barriers that we can underline when talking about young people online, but I'm sure we will move on and talk about other barriers as well.
Thank you.
>> MODERATOR: Maybe looking at you, Santiago: we have had a conversation about rural areas in Colombia coming online. Against the background of what we just heard from Chenai and Debora, and maybe moving away from the youth topic just a tiny bit: who are the people who are coming online in rural areas? Is there anything that we can learn from Colombia here?
>> SANTIAGO AMADOR: Could you switch to the presentation, please? I just want to use one slide; I think it's enough. This is the typical discourse that we politicians give in a rural area when we are deploying computers and Internet. We say exactly this. I was there as the Director of Internet Policy, and I remember saying something very similar: thanks to the Internet, every citizen can access, from any place, all of the information they need, and use it to improve their lives and participate freely in the global discussion. So this is the common discourse of a politician, but if you look at it in detail: not every citizen is accessing, not from any place, not all information; they are not necessarily using the information to improve their lives; and they are not participating freely and are not part of the global discussions.
So I like to do this exercise just to point out that there are a lot of barriers for people to access the Internet. Language is one of them. In a country such as Colombia, people who live in rural areas generally don't know how to speak English, and much of the content is produced in English, for instance.
Connectivity: the speed of connectivity is very low, so they cannot access, for instance, videos or higher-bandwidth content; it is very difficult for them. Also, we are studying gaps, or digital divides, and there is not just one digital divide. We found seven different digital divides, and the worst of them is the gap of intention. The intention to use the Internet is very different when you are in a city, well educated and maybe young, than when you are in rural areas.
So basically, sometimes they don't have the interest or the intention to use the network, or to use the Internet to actually participate. It might be because they are not literate, I mean, they are not writing and reading properly or in the way the discussion requires. They are not bilingual. They don't have access to credit cards, so they cannot pay for anything on the Internet; they cannot access documents behind paywalls, for instance. All of the social divides are amplified on the Internet.
>> MODERATOR: Thank you, Santiago. I really like the notion of the intention gap. Let's maybe come back to that later. Amos, turning to you, and I think you saw this coming: what are the rights of all of these groups that we are talking about now? What are their human rights to participate, and how does that look, before we go into a more detailed conversation about the challenges and how we can maybe redesign norms?
>> AMOS TOH: I will respond to your question, but first I wanted to add a bit to some of the really interesting discussion on barriers to getting access online, in the sense that some of our research, particularly on digital welfare systems, shows that people who are experiencing barriers to coming online are nevertheless forced online to access essential public services, and that's increasingly a problem. We see that particularly in the United Kingdom, where the flagship welfare program known as Universal Credit is effectively forcing some of the poorest and most vulnerable people in society to access their benefits online, even though they don't have the resources, as people have explained here, or necessarily the requisite literacy to participate in that way online.
So that's another dimension to this debate. That brings me to your question about the range of human rights. Quite obviously, freedom of expression is affected, and particularly access to information. A component of freedom of expression that's not really talked about is freedom of opinion: to the extent that not just the way we speak but the way we think is affected by issues of access, or by being forced online, or by what we see online, that's a rather under-explored issue. But I do think that there are also socioeconomic rights impacted here, the right to social security, but also many other related rights.
>> MODERATOR: We briefly touched upon challenges and barriers. When we look at, let's say, new regions where the next billion, the currently unconnected, are coming online, maybe they are already connected but they don't have the means to fully participate. What are the barriers that you see for participation in discourse? So say somebody actually has a device, they have access to the device, and they may also have the economic means to spend time online.
Assuming we have overcome all of those barriers, do the newly connected actually find an environment where the norms we currently have on platforms, such as Facebook, Twitter and others, afford them meaningful participation, Chenai?
>> CHENAI CHAIR: I have two examples. The first one is maybe to quote Alison Gillwald, who defined it as the digital inequality paradox: there is a difference between those who have certain levels of access to education and income and how they engage online, versus those who now do have the resources to engage online but lack the kind of knowledge, in terms of the critical thinking, involved with participating online.
That's one framing. The two examples I have are my favorites, which I picked up when we were talking about women's safety online. In South Africa, "men are trash" is a very important political call to action around violence against women. However, women find themselves using this term on Facebook and then being kicked off the platform or being reported for misbehavior on the platform.
So there are community guidelines that have been set up, but who exactly are they protecting? In engaging with Facebook, they have pointed out that one of the lines against discrimination is that all genders will be respected and everyone may have access to the platform. But if this is my rallying call to action for something that is contextually relevant, then that means that, as a new person coming online, I'm likely to find myself in Facebook jail for a period because my favorite call is "men are trash". So that's one example.
I think, in terms of setting up community guidelines and the barriers that then affect people, freedom of opinion was rightly mentioned. While we are championing the rights of people to engage online, to what extent do we carry over what happens in offline spaces? In an offline space it's easy to walk away from a conversation, but in the online space a screenshot is going to be taken and the point will constantly come back to you, especially if you are someone who occupies a position of power, and generally if you are a woman: whatever you say is going to come back to you. So there are questions around what is a relevant freedom, what is the right guideline, who does it benefit at the end of the day, who designs it and says, listen, this is the community guideline that needs to be set up to protect people on the platforms, and then how is it communicated back to the citizens who actually come onto these platforms?
I'm not sure if Facebook has an orientation for when you sign up for an account: this is how we expect you to behave. Most of the time it's only when you have access to spaces like these, or when these communities actually hold a session where we come and hear what the guidelines are, that it becomes my responsibility to go back and say: if you want to publish something on Wikipedia, this is what you need to follow.
So I think those are the barriers in terms of skills, capacity and engagement around what is the right behavior online.
>> MODERATOR: Thank you. So maybe, if I understand correctly, there should almost be a sort of mentorship for those people coming online: here are the rules, take people by the hand, in a hopefully not-so-paternalistic way, and say, this is what you are expected to do. But do those rules, as they would be communicated right now, actually help everybody around the world? It's probably almost blasphemy for this Conference, which runs under the motto One World, One Net, One Vision, but I want to ask the question: do we actually get to a place of participation for everybody with one set of rules for everybody around the world?
And if I can toot Wikipedia's horn: there are around 300 language versions of Wikipedia around the world, and they all follow a common set of principles, but the rules for conduct and content do differ, because people negotiate those rules within their language communities. So the rules are different for different language communities.
Is there room for that at all for other platforms, Amos, if I can ask you that?
>> AMOS TOH: I think this leads into a discussion about some of the incentive structures that are set up by platforms that are, you know, not run by volunteers. And I do think that some of those incentive structures are just not aligned with the kind of ground-up development of rules that may have come about in the Wikimedia, Wikipedia context.
So one set of rules that I don't think we talk about enough, in comparison to community standards and guidelines, is advertiser-friendly content guidelines, which determine what kind of content is monetized or demonetized. Financial incentives are a huge driver of what content is uploaded and, in some ways, recommended and prioritized on these platforms.
I don't think we will get the ground-up kind of rules ecosystem that has thrived on Wikimedia if it is controlled by advertising interests, and I can go a bit more into that. I was doing some research and looked at some of the rules on advertiser-friendly content guidelines, and you can imagine the kind of content that's prioritized. On YouTube, for example, what may be subject to limited monetization or demonetization are controversial issues and sensitive events such as war, death and tragedies, political conflicts, and content that is made to appear appropriate for a general audience but contains adult themes, including sex, violence and vulgarity.
And on Facebook, under a tiered monetization system, debated social issues are subject to limited monetization if they focus on the debated social issue or use language or gestures that could be considered confrontational or controversial. So you can imagine the kind of incentive structures that are set up on these platforms, which I don't think you would get on a volunteer-run platform like Wikipedia.
>> MODERATOR: I'm intrigued that ad rules essentially govern the spaces where our public online discourses are happening.
>> AMOS TOH: Yes, they shape it. So there is the issue of whether content is left on the platform or taken down, and that's the community standards review pipeline, plus, you know, the legal restrictions that may also apply. But then there is the issue of what content is incentivized, and that leads to questions around monetization and demonetization. It's unclear how much impact these standards on advertiser-friendly content have; it's unclear whether there is a link between those standards and the recommendation systems on platforms that determine what content is prioritized in the news feed, for example.
>> MODERATOR: Thank you, super interesting. Maybe coming back to you, Santiago, from a public sector perspective: our political discourse, or public discourse, moves to the Internet, and as Amos has told us, people are often forced onto the Internet to actually benefit from public services, whether they want it or not.
How does a Government like yours think about these public discourses happening on a platform that may be opaque, where rules such as ad rules may actually shape the discourse? Is there maybe a need for a different kind of platform? I'm not sure you can even answer that: does there have to be a Government platform? How can democratic discourse happen elsewhere?
>> SANTIAGO AMADOR: I want to say something at the beginning, which is that I think one trend is damaging participation in the public sphere on the Internet, and that is zero-rated apps. Mainly people from low socioeconomic areas are accessing the Internet, but they cannot afford a full Internet plan, so they end up having just a WhatsApp plan, you know, or maybe just a Facebook plan.
So a lot of Colombians in strata one and two are just accessing private conversations. They are on the Internet, they are using the Internet, but they are not actually participating in conversations. It used to be different when Net Neutrality was the trend and the principle, but not anymore. So I think this is one of the things that we have to do something about.
But I don't think that a Government can actually promote and compete with a private platform, for two reasons.
First, I think it sounds absurd in current times; and second, people just don't want to be on a Government platform. It happens that a public agency creates an app and nobody wants to have this app. You have a limited amount of capacity on your cell phone, and you always prefer to erase the public app and keep Facebook, keep Instagram, and of course keep WhatsApp.
So I think there is nothing a Government could do better than closing social gaps. The best Internet policy, for me, at this point is to close the social gaps, the gender gaps, the income gaps, the age gaps in the use of the Internet. We are so focused on regulating the Internet, but I think the problem is more systemic than regulating the Internet.
>> MODERATOR: Thank you, super interesting. So maybe if the public app, the public platform, isn't the solution, and I was kind of expecting that answer, how do we actually make sure that the people who we want to engage in public and democratic discourse are able to shape the rules under which they basically have to live on platforms? In your research, and maybe you have ideas for us, how can we make sure those people are heard in ways that enable them to meaningfully participate?
>> DEBORA ALBU: Thank you.
Maybe a good start is to note how many young people I see in this room, which is great actually, and to see that these people are coming into the conversation in literally one of the most privileged spaces. So in that sense, a good way to start including these marginalized groups is to give them a seat at the table. And just to touch on some points from earlier in the conversation: zero-rating is also something that is happening in Brazil, so people only have access to WhatsApp in many of the cheaper data packages, and this is a problem, right, because sometimes this is the Internet.
This is what people know as the Internet. And it shouldn't be; WhatsApp or Facebook or whatever shouldn't be the main source of information. People should have access to everything that is online. So Net Neutrality is also an issue; I just wanted to echo that, speaking specifically about our region and our country.
And I think there is, in that sense, another issue, which regards what type of content is available online. We already discussed language, for example, but we haven't yet come to disinformation and misinformation, and this is an issue when talking about young people or marginalized people in general. Are these people accessing good quality information, and if they are, how can they make good, meaningful use of it?
So maybe to answer your question of how we can guarantee that: to some extent it is, yes, giving these people a seat at the table, so that they are the ones discussing the policies, they are the ones discussing the rules. And, again, I will stop here and then we continue.
>> MODERATOR: So, regarding language and community standards, but also maybe intentions: Chenai brought up the example of women posting "men are trash" on Facebook. At scale, I think we all agree that community standards that may or may not prevent that kind of language or content can only be policed through automated means, and I know you, Amos, have done research on automation on platforms. Does that work for the problem that we are actually trying to explore today? Can automated means, automated content detection systems, grasp context?
Can they make sure that people can find the content, the legal content, that they want to see online, and that they are able to participate, not just find content but also share it?
>> AMOS TOH: Yes. I think there are several kinds of automated systems, as far as we know, operating on the big commercial social media platforms. Obviously we see automated flagging systems that flag content that is either legally restricted or that may violate community standards, and to the companies' credit, they emphasize that they use these as flagging rather than automatic takedown mechanisms.
Where I think there might be some level of automated takedowns happening is in the automated enforcement of advertiser-friendly content guidelines.
This is where we see some issues about some of these criteria, such as the criteria for sexually explicit content, potentially becoming a proxy for disproportionate demonetization of LGBT content, for example. And that has actually culminated in a lawsuit against YouTube in the United States by LGBTQ content creators.
And the third area of automation we see that is potentially problematic from a human rights perspective is recommendation algorithms, recommendation systems that automate the prioritization of particular kinds of content over others. I want to say that sometimes we think about automated systems as standalone systems, but the ways in which they are deployed, and the purposes for which they are deployed, are set by companies. I can turn the question over to you, because I have heard before that how Wikimedia uses automated systems is very, very different from commercial platforms, as I understand it.
>> MODERATOR: Thanks, always happy to speak about that, of course. So just quickly: yes, on Wikipedia there are no ads, so there are also no rules that would govern monetization or demonetization of content, or recommendations like that. There is one system that is currently being developed, and already deployed for some areas of Wikipedia, called the Objective Revision Evaluation Service, ORES for short, which basically evaluates edits, new changes to Wikipedia articles, and whole articles, and helps editors judge whether an edit is disruptive to the encyclopedia or not.
However, it does not make automatic decisions about removal; it flags edits for editors. We call that a system of AI plus human review, AI being the umbrella term for machine learning systems, and we believe it should essentially be editors, so humans, making the context-dependent call on whether an edit is good or bad. That really helps editors evaluate things more quickly, because there are people, so-called patrollers, who look at new edits coming in, and they save a lot of time if the possibly bad edits are flagged for them rather than having to go through every edit themselves.
So that's the first example we are working on, the first such system, and it really exemplifies how we think about AI: it should aid people to improve discourse rather than shape discourse.
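[Editor's note: for readers who want to see what the "AI plus human review" pattern described above looks like in practice, below is a minimal sketch in Python against the public ORES scoring API that Wikimedia operated. The revision ID and the flagging threshold are illustrative assumptions, not values used by Wikimedia; the endpoint and response shape follow the ORES v3 documentation, but treat this as a sketch rather than an authoritative client.]

    import requests

    # Ask ORES to score one English Wikipedia revision with its
    # "damaging" model. ORES only returns probabilities; any action
    # on the edit is left to human patrollers.
    ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki"
    rev_id = 123456  # hypothetical revision ID

    resp = requests.get(ORES_URL, params={"models": "damaging", "revids": rev_id})
    resp.raise_for_status()
    score = resp.json()["enwiki"]["scores"][str(rev_id)]["damaging"]["score"]
    p_damaging = score["probability"]["true"]

    # Flag for human review instead of removing automatically:
    # this mirrors the flag-don't-delete workflow described above.
    THRESHOLD = 0.8  # illustrative; real patrolling tools tune this
    if p_damaging > THRESHOLD:
        print(f"Revision {rev_id}: flag for patroller review (p={p_damaging:.2f})")
    else:
        print(f"Revision {rev_id}: no flag (p={p_damaging:.2f})")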
>> AMOS TOH: Just to follow up on that, I think that's a really good contrast for illustrating that it's not necessarily automation, per se, that's the issue, but the institutional and technical structures in which it's embedded. So in the context of community standards review, what the automated detection systems used on the big commercial platforms really do is enable employees or independent contractors working for the company to enforce top-down community standards.
But a similar kind of automated detection system may be used differently in the Wikipedia context, to help community-appointed editors evaluate edits. So it's a more democratic structure that the automation is serving.
>> MODERATOR: Thanks, Amos, always good to hear that perspective on the work that we do. Chenai, going back to the example you noted of women posting "men are trash": how can you push back as a group that is pushing against boundaries, against community standards as they exist now? What can people do to actually make sure they are heard? It doesn't have to be that example; youth are often pushing boundaries, and so is art. If your content is removed from a platform and the norms need to somehow change, how can you engage with a platform to get that done? How do you do this, or how is this currently being done?
>> CHENAI CHAIR: One of the big strategies has been people voting with their feet, literally leaving the platform and going elsewhere. With Facebook you think you leave the platform but you are still somehow tied to it. So you find that a lot of young people, those with access to the Internet and good phones, are moving away from Facebook and going to platforms like TikTok. But there is a whole issue going on with TikTok, with online bullying and racist content, so I will not touch on TikTok for now. Coming from feminist communities and trying to understand women's participation online, with a gender-diverse approach to it, we have gone on to create strategies: if this platform doesn't allow us to have this particular conversation, and if it results in some of our community members being cut off from the platform, then the answer is that you create alternative platforms.
I met someone at an event yesterday who created a Twitter-like platform for sex workers. It allowed them to engage without being flagged and taken off, and to engage with their own online rights. So you find that there is a need to create alternative spaces. And the reason why Facebook, WhatsApp and those concentrated platforms are so dominant is that you are able to curate your own content according to who it's coming to, you can get it in a language you understand, and it's cheaper to access them than to go beyond those platforms.
So the question becomes how we then create, since we are talking about using arts, how we then communicate and speak to communities. For example, if you are a young woman entrepreneur and you now have an online shop on Instagram, that should not be your stopping point. You should move to other platforms, so that when your page gets taken down because you supposedly violated community guidelines, you have your content elsewhere.
So I think the work that now follows is educational: we need to invest beyond these online platforms. Yes, we can have an Internet Governance Forum focusing on how we make the platforms work, but one of the mandates I would like to see, if we are looking at education as people who work within the Internet Governance space, is that we now need to engage the educators who are not necessarily in the room: they are developing curriculums, they are teaching students, and they are the ones to whom we then say, here is a computer program you can run with.
So I think alternative spaces, educating people who are on these platforms, and looking into bridging more of the social gaps that were identified, that's where the investment really is: in trying to make sure that as people come online, they know that they have alternatives as well.
>> DEBORA ALBU: Maybe to chime in and give an example of what this going away, or creating of platforms, looks like: there are feminist servers nowadays. The idea is, if you can't have your content hosted in a certain specific way, you host it yourself, and this is becoming more and more common. And again we touch on another issue, which is: okay, who is actually creating technology? Who is designing technology? Do we have enough of these diverse groups, women, young people, Black people, designing and creating technology?
>> MODERATOR: Do we then risk fragmentation of the discourse, if everybody goes to their own silo? We are aware of these different pushes, delete Facebook, delete whatever, and then we are left with what exactly? Where do we go? Where do people go to talk to each other? Is there a risk in that, versus the current risk of siloing?
>> CHENAI CHAIR: At the end of the day, the risk of siloing emerged because there was one powerful entity defining how to engage. So you have the choice: if I play by the rules of what has been defined on the platforms, I risk self-censoring. That touches on the freedom of opinion that was highlighted, which I am going to carry with me everywhere from now on. And I was part of the panel that approved this slogan, but at the end of the day, with One Net, One Vision, the power issues mean that someone else has the power to define what kind of net we want.
Not everyone agrees, and it's good to engage and point out: was this the vision of what the Web should be, and is the Web right now what was envisioned at the time? And if we think about going beyond how it is, how do we ensure that even in that fragmentation, people still have conversations? So I think we need to recognize those differences: all of these spaces have rules that will leave some people out.
>> MODERATOR: What is the public sector, the Government perspective on this?
>> SANTIAGO AMADOR: Unfortunately, there is no local Government position on this; it's not a topic that Governments are discussing right now. The only thing I can say is that the Internet is already fragmented. We are not using the Internet; we are using specific apps to do very specific things. Actually, most of my conversations, and I would say most conversations for people in Colombia and other countries, are on WhatsApp. We are building WhatsApp groups with 200 people, and then I have one group for this interest and one for that interest. So just as advertising is targeted, our interests are fragmented as well.
So I know I can talk about a very specific topic with a specific type of people, and if I want to go to another group, I talk about another topic with those people. The general conversation maybe happens sometimes, when someone posts something and people discuss briefly in the replies to the post; and if they disagree, they prefer to just leave the conversation. It's not about the tool; it's about how we are confronting public discussions.
We have biases, for instance, that prevent us from engaging in public conversations, because we want to be reinforced in our arguments. So I prefer to discuss with someone who is going to say that the things I'm saying are clever or right. Maybe I fight a little bit, but then I'm tired of fighting, and we are all tired of fighting. There is a lot of polarization in public discourse, around politics and religion.
So maybe we are tired of being on this public Internet, and we end up relating to people who think like ourselves.
>> MODERATOR: Thank you. Before we go to the Q and A with the room, and hopefully also remote participants, I wanted to ask each one of you for a quick, sort of blue-sky scenario. What should this online discourse, this participation for everybody, look like? Just maybe two sentences, starting with you, Amos.
>> AMOS TOH: I think that in order to create the online discourse that we want, I cannot emphasize enough the need for looking at and remedying some of the structural gaps we see in incentives, but also in the resources for bringing people online. Some of these structural barriers are really important to resolve before we even have the conversation about what kind of online discourse people who aren't yet online would want.
>> DEBORA ALBU: The million dollar question. I will go with the take of a healthier public sphere. If we have a healthier public sphere offline, in the sense of polarization, in the sense of prejudice, in the sense of harm, in the sense of attacks on human rights, then we can also have a healthier public sphere online.
>> CHENAI CHAIR: So are we collecting the million dollars after this?
>> MODERATOR: I don't have it with me right now. It depends on the answer.
>> CHENAI CHAIR: Public discourse that allows people to be critical, that takes into account body, content and interests, so centering people and not just looking at them as producers of data.
>> SANTIAGO AMADOR: In terms of what Government could do: maybe be better at providing education, improving the education systems so people can build their voices. That is one. Critical thinking as well: fake news is damaging the public sphere so much, so being able to distinguish between good information and bad information makes a lot of difference.
And, as I mentioned, trying to reduce all of the social gaps and gender gaps in the offline world, because the Internet is not closing those gaps, it is just increasing them. The Internet is not the equalizer; it is not capable of equalizing. It amplifies what is happening in society. So I think we should work more offline to bridge those gaps, and then people are going to be better at public discourse when they are online.
>> MODERATOR: Thank you to all of you for this conversation. We now have 20 to 25 minutes for questions from all of you to these four wonderful speakers. Please take the opportunity, with them being basically captive over here: they can't walk away and they need to respond to your questions. I also want to encourage my co-organizers in the room to ask questions should they have some. And Christian, maybe you can let us know whether there are questions online as well.
Maybe we will start with: are there any questions in the queue, in the app, Christian? No, not so far. Okay. So maybe we can start over here. We don't have name tags to go by, so just raise your hand and I will try to keep track of you. Over here, please say your name.
>> AUDIENCE: My name is Parsis Ajiv, I'm a writer and researcher based in Bangladesh. Thank you for a very engaging discussion. Coming from Bangladesh: Internet use there is actually very high compared to many Asian countries. If we look at the data, there is a certain urban-rural gap, and there is a gender gap. But hearing this discussion, and from my experience, I would like us to think about the pitfalls and costs of inclusion. For example, in Bangladesh, getting a phone and a SIM is now quite affordable; people actually have multiple SIM cards. But for the last three or four years, because of Government regulation, to get a SIM card you need to share your biometric information, and you get the SIM card against your national ID, which seriously affects democratic space and participation.
So, I mean, there are a lot of benefits to being on the Internet.
It is a necessity, it is a convenience, but at the same time it seriously affects the configuration of public space. In terms of language and inclusion, an example I can give from Facebook users, from research I have seen: on Facebook you can actually quite easily communicate in Bangla. But women, and people who identify as queer, for example, have faced and systematically face insults, slurs, threats of violence and harassment, and if these are written in Bangla, for example, Facebook has no intention of addressing them.
If we talk to them, they will say they have no capacity to try to understand what, in Bangla, is a threat and what is an insult. They will say they have no capacity; I will say they don't have the intention to address this issue. So it's more than a language barrier, because people can participate on Facebook or WhatsApp in Bangla, but these platforms have, again, no intention of addressing the kinds of hostile situations people face, and they fall back on: we don't have the capacity to do it. Thank you.
>> MODERATOR: Does anybody want to respond to this?
>> CHENAI CHAIR: It's a point I wanted to raise, in terms of the politics of moderators: even with AI-assisted moderation, content is flagged and a person reviews it. The biggest challenge I see is the contexts these moderators are coming from, and the nuances they miss. I don't know how many moderators Wikipedia has to support the AI flagging, but I am fully aware that there is a need to unpack the politics of those moderators, because we can't assume that just because you are given this job of pre-vetting the Internet, you are going to leave behind your politics and whatever biases you have. So I think those are some of the areas of research we should work on, and thank you for reminding me to raise that point.
>> AUDIENCE: My name is John Weitzmann from Wikimedia Germany. Sorry, it's again the Wikimedia side asking a question, or two questions actually. One would be: Amos highlighted that a lot of the rule making is basically rule making by the advertising world, because advertising is dominating so many parts of the Internet, and the reason for that is obviously that it's financing large parts of the Internet. So that's what's paying for it.
So wouldn't it also be necessary to get to a new baseline of running the Internet on something other than advertising money, and what would that be? That would be the first question. The second question, maybe to all of you, and especially, for example, Chenai: you made it sound like a discussion about the lack of capacity of those people coming online, but isn't it also a chance that some of those communities have not taken part in the discussions of ten years ago and are not biased in the ways that we, for example, are biased when talking about rule making?
>> AMOS TOH: I think that is precisely the question that needs research and, at some point, advocacy. We see some discussions about alternatives to advertising-based models of social media platforms or other kinds of Internet platforms, but I don't think we see enough of those discussions. I do think that, at least in certain contexts, and not to fall back on the Wikipedia example again, there are ideas about community-led networks that are volunteer run.
And even in contexts where there are financial incentives, such as in the case of Reddit, for example, there is a radically different way of approaching how content is shared and accessed: on Reddit you have different subreddits where there is some level of community control over different streams of content, rather than a top-down structure. So I think even within an ad-based model there is room for innovation in how content can be accessed, moderated and distributed.
>> DEBORA ALBU: Just maybe touching on your other question regarding lack of capacity: I think it's a problem, talking specifically about young people, when we have very adult-centric approaches to what media literacy is, to what critical thinking is, to how we distinguish good from bad quality information. So I think it's very important that we take our own glasses off and put on young people's glasses, to see what they are seeing in terms of what is good or bad content for them.
Just an example in that sense: it's research that we have been doing at the institute, together with a network of other researchers based in Latin America and the Berkman Klein Center at Harvard University. Through this research, what we want is to understand, from their perspective, what digital skills mean, what good quality content means, and what lack of content, what content gaps, they see.
And in order to do so, we are using workshops and design thinking methodologies to literally step into their shoes and say, okay, what are your takes on this? Because basically every piece of research that is done on young people, or about young people, is not necessarily inclusive of young people in its very design.
So we want to change this notion, this very idea, so that we see from their perspective. And this is about young people, but it could be about any of the other marginalized groups we have been discussing here. So lack of capacity does not necessarily mean what we think lack of capacity is.
>> CHENAI CHAIR: I have done research on young people in African countries and how they engage online, across four countries including Tanzania, Rwanda and Nigeria, and I was learning about their rules of engagement for circumventing restrictions: when they have been kicked off a platform, this is how they go and create a new persona or identity to engage on the platform, and some of the ways they were using the Internet to address these issues.
So, to hold the mirror back up to myself: "lack of capacity" is what would fly in the policy context, because this is what the language says; therefore, if they are not abiding by these rules, they have a lack of capacity. But it's a great opportunity, because you find that people are being innovative, as Debora was saying, to participate in these spaces and figure out how to engage.
So, for example, a colleague and I set up a feminist pan-African platform on WhatsApp, because we knew if we were to have it on Facebook we would be kicked off, and while we knew how to engage on Twitter, we had had accounts kicked off because we were from different places. What we realized is that we were able to take best practices from different platforms that worked for us, and because it's a smaller group, we were able to settle on the rules that worked for us. And the rules that people came with were not only about online spaces: rules around self-care, acknowledging the stigma of leaving a WhatsApp group. I come from a society where, if you leave a WhatsApp group, you are going to be flagged for a year.
So taking that context into account is an opportunity that these platforms have, and I think Wikimedia is open to it as well, and I know you support knowledge: those with power are able to revisit their rules and ask what they mean as new people come online. So I think that is also an opportunity for everyone.
>> MODERATOR: Further questions in the room?
>> AUDIENCE: Hello, this is Fabro Steibel from ITS, the same institution as Debora. I have a weird question, but it might be relevant. It's about agenda setting: every time we put one item first on the agenda, something else rolls back. I will give one example. Yesterday we were discussing 5G, that 5G should be a policy priority and so on. At ITS we have been discussing for five years connecting public schools to the Internet: half of the schools in Brazil are disconnected, and the other half have 2 megabits per second, which is not adequate Internet for kids. So thinking about the IGF: what do you think is the one hyped topic that is relevant to inclusion but should not be the top priority? What kind of hype might make us lose focus on something that is more important for inclusion? Framing the question differently: if you could name something that everyone is talking about, but where you think we should go back to basics because something else comes first, what would be the hype topic that you see growing at the IGF, with a minor alert saying, this is relevant, but let's not put it at the top of the agenda because something else should be there?
>> AMOS TOH: That's a really important question. There are certain issues that have been completely left off, certainly I think off this year's IGF agenda as far as I know, that relate to the intersection between socioeconomic rights and the Internet. When we talk about access to knowledge, a lot depends on what kind of knowledge we are talking about, for example in the context of the provision of essential public services.
We are talking about privately run systems in the delivery of public services such as benefits, but I think there is greater room to talk about what publicly run platforms for accessing benefits applications and welfare services should look like, and how they should involve the very people who are directly affected by these platforms. And I think those kinds of discussions just need to happen.
But then, in the context of what we are talking about, and some of the issues around access to knowledge in the social media context, I think it's the framing that's important. We often think of YouTube as a social media platform, for example, but there are certainly important questions about whether YouTube is also a digital labor platform, because you are talking about people who are supplying content and making a livelihood out of the content they are creating for YouTube, as a result of the financial incentive structure that is put in place.
And I think that goes for several other platforms as well, not just YouTube. So if we shift the conversation from some of the information-related rights to also include labor rights, what are the rights of YouTube creators, of content creators, then I think that in effect broadens our conversation about access to knowledge.
>> CHENAI CHAIR: So, having been part of the planning committee for this Conference: the three thematic tracks were data governance, inclusion, and safety and security. I think what's missing is an intersectional understanding of these issues. At the end of the day, if you are looking at data governance: to what extent are the current models around data inclusive, and who exactly are they representing? Yesterday there was an example from a colleague from Point of View in Mumbai, who pointed out that the level of inclusion at the IGF starts with the sign-up form, where it's "he" or "she", and they did not identify with that; and then the document they are supposed to present is an ID document, while we are having discussions around digital IDs and their validity and who they leave out.
So for me, the thing being left behind is that we are not having the conversation around the intersectionality of all of these issues and what it really means for people to participate in these spaces.
>> SANTIAGO AMADOR: I just want to add something very short. We are talking here a lot about Artificial Intelligence, for instance, but in Colombia we are so, so, so far behind, and we have to figure out what to do tomorrow, not in 100 years, just tomorrow, because we don't have AI systems working in our Government today. But if we want to have this in 20 years, what should we do tomorrow? Teach algorithms in schools for children, very basic? Train teachers? Do more data analytics? There are things that come before Artificial Intelligence; we have to do something else before we can talk about Artificial Intelligence.
So those capacities, those skills that we have to form, the talent needed to build Artificial Intelligence in our own country: we have to do many things first. Actually, we are now using other people's algorithms, European algorithms, Asian algorithms, but we are not producing our own algorithms because we don't have the proper talent to do it, so maybe we are 20 years behind.
>> MODERATOR: Any other questions in the room? Otherwise, looking at the clock, we may go into a short summary or wrap-up of the session before we let you all out for lunch. Any further questions? Maybe also online? No. Okay. So inclusion of remote participants isn't great, apparently. We will try to do better next time.
>> AUDIENCE: My name is Juliet. I'm sorry, my voice is going. One question: earlier you mentioned the trend of parental mediation. It takes me back to the age of dial-up, when the parent would sit right by the phone, and you are like, hello, friend. How is that playing out in the online arena? Parents are a very strong gatekeeper of information. Is it damaging for children to have that level of parental mediation in how they engage on the Internet?
>> DEBORA ALBU: My other colleagues may want to contribute as well. I think when we talk about parental mediation, it's important to say it's not something good or bad in itself. It means that it is important to have an idea of shared responsibility between the child or young person, their parents, the platforms and the Government. So there are multiple layers of who takes responsibility for what that kid or young person is doing online. But she or he or they must be included in that as well.
Maybe another problem here is the fact that many times these parents are not the ones who have enough digital literacy to understand what those kids are doing. To give an anecdotal case: my mom comes to me and asks, how do I do this? I have to teach her; now imagine what happens to kids in that sense. They are the ones who need to be teaching their parents what this is, while at the same time educating themselves about what content or skills they are going to develop online.
So that's another problem, and I think it's very important, because we talk a lot about digital literacy and media literacy for young people, but we are not talking about it for other generations who are not digital natives and who don't know how to access and actually benefit from what's good out there online.
>> SANTIAGO AMADOR: When I was in Government, we had to deal with online child protection. We did a little research and some focus groups, and we realized that parents think there are two different worlds, the online and the offline. The online world is, of course, real, it's here, and it's even more dangerous, but they just think there are two worlds and that online nothing is going to happen. And actually, when they are busy, they just give a kid a tablet so the kid is quiet for a bit, and they don't think that there is an actual world inside.
>> CHRISTIAN FIESLER: We have a question from an online participant, and it is to the panel: what do you think about digital inclusion for those with special needs, and how should it fit into the core values of the Internet? After all, it is meant for all.
>> MODERATOR: Thank you. Who wants to take that one?
>> DEBORA ALBU: Maybe I can give an example. At the Institute for Technology and Society, one of the things we do is literally develop technology. We build mobile apps, platforms, and chatbots, and this is something that concerns us from the design stage of those technologies. It's important for the team not only to seek knowledge about how to include disabled people, but also to look for people who can actually tell us. Again, it's about these people's needs; it's about how they want to be included.
So, going back to the other point I made about not having an adult‑centric perspective, it's about not having an ableist perspective as well.
>> AMOS TOH: Yes, I think that's a critical question that highlights the tensions in digital rights discourse. When I spoke to disability rights groups in the U.K. about some research on Universal Credit, which is the U.K. welfare system, the objection wasn't that the Government was moving welfare benefits applications online. The objection was how it was being done. Offline processes aren't really beneficial for people generally, and certainly not for people with disabilities, and the migration online may actually help people with disabilities better access and manage their benefits. But the problem is that without adequate input, or any input at all, from people with disabilities, we end up with a very ableist system that overlooks their needs or is actively hostile to the needs of people with disabilities.
And then I had another point which I completely forgot.
>> CHENAI CHAIR: That's a very good question and it always comes up. One of the biggest challenges, speaking from a research perspective, is that we often overlook people with disabilities. Even when we say we are collecting data to inform evidence‑based policy, we design tools that make it difficult to reach these communities. There is a need for intentionality: if we want evidence‑based policies that address everybody, then we need to take into account the kind of tools we use to ensure we can reach these communities. If we are using surveys that are meant to be nationally representative, we may not have access to the person who is differently abled.
So I'm probably calling myself out here, but I do think there is a need for more intentional research that actually tries to look at the intersectionalities, research that champions gender but also looks at gender and disability together. I forgot the name of the article, but I know my colleagues in India did write a specific article that looks at how people who are disabled are actually finding pleasure online and how they connect with communities, moving away from perceiving them as not having any form of agency.
So I do believe there is a need to invest in research to understand those issues.
>> SANTIAGO AMADOR: Regarding people with disabilities, I just want to bring up what I think is a successful case from Colombia that maybe could be helpful for others. We changed the approach from the demand side to the supply side. On the demand side, people with disabilities have to ask the Government to do something; on the supply side, the Government simply guarantees access. So we bought a country-wide license for a piece of software for people with visual impairment, the famous JAWS, I don't know if you know it. It's a very powerful screen reader that enables blind people to access the Internet.
So instead of people having to ask for individual licenses, which are very costly, we installed the license on every single computer in the schools so people can go and use it. Because people in the schools say, no, we don't have blind students, and of course you don't have blind students, because you don't have the tools for them to study.
So we decided to change to the supply approach and installed it on every single computer so people can access it. We also created a model with economies of scale, so we don't pay per individual license but get a good price for the country license. The cost would have been $1,500 per license per year, and with this economy of scale we got it down to about one dollar per license per year.
>> MODERATOR: Thank you. I'm glad we did get a question online, and a good one actually. Thanks for sharing this. We have about five minutes left until, I guess, we lose the room. I want to turn over to Christian Fiesler, who is a co‑organizer and leads the Nordic Centre for Internet and Society in Oslo, for a quick summary and wrap-up of our conversation.
>> CHRISTIAN FIESLER: I would like to wrap up the session with a summary of my impressions. I think we had an interesting discussion, starting out on the matter of access, which was very much concerned with the idea of urbanity as a divider in terms of where people have access to the Internet and to what type of Internet. You raised the question of intersectionality as a lens to understand participation, and you raised a very interesting point about different levels of the digital divide. I didn't actually know that we by now distinguish seven levels, very interesting, where you essentially pointed to the idea of intentionality.
You continued with the idea of freedom of opinion, in particular the question of who is protected and how we differentiate between the right to have a civil discussion and the right to legitimate counter-speech against certain developments. The discussion then went in the direction of incentive structures, the idea of what type of speech should be monetized and whether that in any way, shape, or form creates an incentive around what speech is actually allowed. You then had an interesting discussion, opening up to the room, about what type of platforms we want to have, especially when it comes to the capability of pushing boundaries. You mentioned the idea of exit from platforms, but also the question of whether that leads to fragmentation. I think the consensus of the panel was very much that we already have some type of fragmentation of the Internet.
However, these fragmented platforms are essentially the global players, right? WhatsApp was a commonly cited example. Going forward, one of the big questions, which also came up in the overall discussion, might not only be one of access but of speech: what type of speech can actually be monetized, the connection between expressing an opinion and finding remuneration in terms of engagement on the Internet. Those were, to me, the interesting points you raised.
>> MODERATOR: Thank you, Christian. And right on time. Please join me in thanking our panelists, Amos, Chenai, Santiago, and Debora, for this interesting discussion and for taking the time, and thank you all for joining us today. I hope you have a good rest of the day and of the IGF. Thank you very much for coming to this session.
(Applause)