The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.
***
>> IAN BARBER: Hello everyone? Yeah it's working. We're good.
Good morning. Hope everyone is doing well. Thank you so much for joining this session. One of the many this week on AI and AI governance.
But with a more focussed view and perspective on a global human rights approach to AI governance. My name is Ian Barber. I'm the legal lead at Global Partners Digital, a Civil Society organisation based in London working to foster an online environment underpinned by human rights.
We've been working on AI governance and human rights for several years now. So I'm very happy to be co‑facilitating this alongside Transparencia Brasil, our online moderator. Thank you very much.
What I'll be doing is providing a bit of an introduction to the workshop, setting the scene, introducing our fantastic speakers both in person and online, and providing a bit of structure for the discussion that we're having today, plus some housekeeping rules.
Really this workshop is meant to acknowledge that we stand at the intersection of two realities: the increasing potential of artificial intelligence on the one hand, and the ongoing relevance of the international human rights framework on the other.
When we think of a human rights‑based approach to AI governance, a few things come to mind: firmly and truly grounding policy approaches in the international human rights framework; the ability to assess risks to human rights; promoting open and inclusive design and deployment of AI; as well as ensuring transparency and accountability, amongst other elements and measures.
And given this, it is probably not news to anyone in the room that the rapid design, development and deployment of AI demands our attention, our understanding and our collaborative efforts across various different stakeholders.
Human rights are enshrined in various sources such as conventions and customary international law, and through dynamic interpretation and evolution this framework works to guide us continually towards a world where people can exercise and enjoy their human rights, and thrive without prejudice, discrimination or other forms of injustice.
And like any technology, AI poses both benefits and risks to the enjoyment of human rights. I'm sure you have attended other sessions this week where you've spoken in a bit more detail about what those look like in various sectors and across different civil, political, economic and social rights. But today what we're going to be doing is narrowing in on a few key questions. The first is "how can the international human rights framework be leveraged to ensure responsible AI governance in the rapidly changing context and world that we live in?"
And I think this question is important because it underscores how AI is now able to influence so many things, from our job prospects to our ability to express ourselves to legal verdicts. So ensuring that human rights continue to be respected, protected and promoted is key.
Secondly, we must reflect upon the global implications for human rights of the ongoing proliferation of AI governance frameworks that we're seeing today. And also, in the potential absence of effective frameworks, what is the result and what are we looking at? There is this ongoing proliferation of efforts at the global, regional and national levels to provide frameworks, rules and other normative structures and standards that are supposed to promote and safeguard human rights.
For example, just to highlight a few: ongoing efforts at the Council of Europe to develop a new treaty on AI; the UNESCO Recommendation on the Ethics of AI; and other efforts such as the more recently proposed UN high-level advisory body on AI.
But at this point we've yet to see comprehensive and binding frameworks enacted which might be considered, you know, effective and sufficient to protect human rights.
And without these safeguards and protections we therefore risk exacerbating inequality, silencing marginalized groups and voices, and inadvertently creating a world where AI serves as more of a divider than a promoter of equality.
So what do we want to see, and what do we do to ensure this is not the case and not the future that we're looking at? And lastly, over the next 80 or so minutes: the path towards responsible AI governance is not one that can be traversed alone. We need to navigate these challenges together, fostering meaningful engagement by all relevant stakeholders. That is why on the panel we have voices from Civil Society, from private companies and from international organisations, which are all needed. And we also need to particularly amplify voices from the global majority. Historically, many regions across the world are left out of global dialogues and efforts at global governance, and that is very much the case when it comes to AI as well.
So this workshop is not just a gathering. It is one for information sharing, but it is also a call to action. It is really, I think, the beginning of an ongoing collective effort to address the range of complexities that have come about from AI, and to really work to ensure the ongoing relevance of our human values and human rights. With that I'd like to get the ball rolling and, drawing from the diverse range of experiences here, really talk about what we want in terms of a global human rights approach to responsible AI governance.
And to do that we have an all‑star line-up of speakers from a number of stakeholder groups. I'm going to briefly introduce them, but I encourage you all to provide a bit more background on where you come from, the type of work you do, and really why you are here today and your motivations.
And in no particular order: Marlena Wisniak to my left. And Rumman Chowdhury. And Tara Denham from Global Affairs Canada. And Pratek as well. And online, Shahla Naimi from Google.
In terms of structure, we have a bit of time on our hands and we're going to divide the session into two parts. The first part is going to have a particular focus on the international human rights framework, and also on this ongoing proliferation of regulatory processes on AI that I've alluded to already. We'll then take a pause for questions from the audience as well as those online. And a special shout-out to Marina from Transparencia Brasil, who is taking questions and feeding them to me so we can have a hybrid conversation.
And after this first part we'll stop and then have a second part. That will look a bit more at the inclusion of voices in these processes, and at how engagement from the global majority is imperative. And that will be followed by a final brief Q&A session and then closing remarks.
So I hope that makes sense. I hope that sounds structured enough and productive and I look forward to your questions and interventions later.
But let's get into the meat of things, looking at the international human rights framework. We're at a point where various efforts on global AI governance are happening at breakneck speed. There are a number of them that I've mentioned, including the Hiroshima process that was spoken about just yesterday, if you were at the main event.
So my first question and prompt goes to my left, to Marlena. Given your work and efforts to advocate for rights-respecting approaches in these types of AI regulatory processes, what do you consider is missing in terms of aligning them with the international human rights framework? And again, if you could provide a brief background introduction, that would be great.
>> MARLENA WISNIAK: Hi everyone. And welcome to day 2 of IGF. Feels like a week already. My organisation, the European Center for Not-for-Profit Law, is a human rights org focusing on civic space, freedom of association and assembly, freedom of expression and privacy. And over the past five years, we've noticed that AI poses big risks, and some opportunity, but great potential for harm as well, for activists, journalists and human rights defenders around the world.
So the first five years of our work in this space were rather quiet, or I'd say it was rather a niche area, with only a handful of folks working at the intersection of human rights and AI. And by handful, I really mean like 10 to 15. And this year it's expanded, maybe a ChatGPT effect, with many more now taking up a human rights-based approach to AI.
Ian mentioned a couple of the ongoing regulations. I won't bore you this morning with a lot of legalese, but the core frameworks where we at ECNL focus on and advocate for a human rights-based approach are the EU AI Act and the laws that are happening right now, the Council of Europe convention on AI, and national laws as well; we've seen these expand a lot around the world recently. We engage in standardisation bodies, so the US NIST, the National Institute of Standards and Technology, and the EU ones. And international organisations like the OECD and the UN. And the Hiroshima process is one we're following closely as well.
In the coming years, as the AI Act is set to be adopted in the next couple of weeks, and definitely by early 2024, we'll be following the implementation of the act. And so I'll use this as a segue to talk to you about the core elements we see should be part of any AI framework, any AI governance, from a human rights-based approach. And that begins with human rights due diligence and meaningful human rights impact assessments in line with the UN Guiding Principles on Business and Human Rights.
So we see AI governance as an opportunity to implement mandatory human rights due diligence, including human rights impact assessments. In the EU space, that also involves other laws; but beyond the EU, globally, the UN and others have an opportunity right now to actually mandate meaningful, inclusive and rights-based impact assessments. That means meaningfully engaging stakeholders as well. Stakeholder engagement is a necessary cross-cutting component of AI governance, development and use. At ECNL we look at both how to govern AI and how it is developed and deployed around the world. We understand stakeholder engagement as a collaborative process where diverse stakeholders, both internal and external, including those that develop the technologies themselves, can meaningfully influence decision making. So on the governance side of things: when we're consulted in these processes, including in multistakeholder forums like the IGF, are our voices actually heard? Can they impact the final text and provisions of any laws or policies that are implemented? And on the AI design and development side of things: when tech companies or any deployer of AI consult, do they actually include those voices? Do these voices inform and shape final decision making?
In the context of human rights impact assessments of AI systems, stakeholder engagement is particularly effective for understanding what kinds of AI systems are even helpful or useful, and how they work.
So looking at the product and service side of AI or algorithmic and data analytics systems, we can really shape better regulation and develop better systems by including the stakeholders. Importantly, external stakeholders can identify impacts on human rights, such as the implications, benefits and harms of the systems on people, looking in particular at marginalized and already vulnerable groups.
If you are interested to learn more about engagement, check out our Framework for Meaningful Engagement; you can find it on Google under "Framework for Meaningful Engagement". These recommendations can also be used for AI governance as a whole. Moving on, I'd like to briefly touch on transparency, which, in addition to human rights impact assessments and stakeholder engagement, we see as a prerequisite for accountable and rights-based global AI governance.
So not to go into too much detail, but we believe that AI governance should mandate that AI developers and deployers report on datasets, including training datasets; performance and accuracy metrics; false positives and false negatives; human-in-the-loop and human review; and access to remedy.
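As a rough illustration of the reporting described above, here is a minimal sketch, in Python, of how a deployer might compute the listed error metrics for a public transparency report. It is not an ECNL template; the field names and report structure are invented for this example.

```python
# Illustrative only: computing the error metrics named above (accuracy,
# false positives/negatives) for a hypothetical transparency report.
# Field names and structure are invented, not an official template.

def transparency_report(y_true, y_pred, dataset_desc, human_review):
    """Summarise a binary classifier's behaviour for public reporting.

    y_true / y_pred are equal-length lists of 0/1 labels
    (1 = the system flagged the item).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "training_data": dataset_desc,                     # dataset provenance
        "accuracy": (tp + tn) / len(y_true),
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else None,
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else None,
        "human_in_the_loop": human_review,                 # human review step?
    }

# Hypothetical usage with toy labels:
print(transparency_report(
    y_true=[1, 0, 0, 1, 0, 1],
    y_pred=[1, 0, 1, 1, 0, 0],
    dataset_desc="public posts, 2019-2022, 12 languages",
    human_review=True,
))
```

The point of publishing rates rather than raw counts is that external stakeholders can compare systems of different sizes and spot disparities across deployments.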
If you would like to learn more about that, I urge you to look at our recent paper, published with Access Now just a couple of weeks ago, on the EU Digital Services Act, with a spotlight on algorithmic systems, where we outline our vision for what meaningful transparency would look like.
Finally, access to remedy is a key part of any governance mechanism, including both internal grievance mechanisms within tech companies and AI developers, as well as obviously remedy at the state level and judicial mechanisms. As a reminder, states have the primary responsibility to protect human rights and to give remedy when these are harmed.
And one aspect we often see in AI governance efforts, especially by governments, is to include exemptions for national security, for counterterrorism and for broad emergency measures. At ECNL we caution against overbroad exemptions that are too vague and broadly defined, as these can be at best misused and at worst weaponized to restrict civil liberties.
So if there are exemptions for things like national security or counterterrorism in governance, we really urge a narrow scope, including sunset clauses for emergency measures, meaning that any exemptions in place will end in due time, and a focus on proportionality.
And finally, what is missing? What we see today, both in the EU and globally, is that AI governance efforts want to take a risk‑based approach. And the risk considered is often risk to finance and business; I mentioned national security, counterterrorism, these kinds of things; but rarely risk to human rights. And the AI Act itself in the EU is regulated under a product liability and market approach, not fundamental rights.
In our research paper of 2021, we outline key criteria for evaluating the risk level of AI systems from a human rights‑based approach. That means we recommend determining the level of risk based on the product design, the severity of the impact, any internal due diligence mechanisms, the causal link between the AI system and adverse human rights impacts, and the potential for remedy. All these examples help us really focus on the harms of AI to human rights.
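To make those criteria concrete, here is a toy sketch of how they could be combined into a single risk rating. The scoring scale, weights and tier thresholds are invented for illustration and are not the methodology of ECNL's 2021 paper.

```python
# Toy illustration: combining the human-rights risk criteria listed above
# into a risk tier. Scores, thresholds and tier names are invented.

CRITERIA = (
    "product_design",       # e.g. is surveillance the core function?
    "impact_severity",      # how grave and widespread is the harm?
    "due_diligence_gaps",   # weak or absent internal HRDD processes
    "causal_link",          # how directly the system causes the harm
    "remedy_difficulty",    # how hard the harm is to remediate
)

def risk_tier(scores):
    """Map per-criterion scores of 0 (low concern) to 3 (high) to a tier."""
    total = sum(scores[c] for c in CRITERIA)
    if total >= 12:
        return "unacceptable"  # candidate for prohibition
    if total >= 7:
        return "high"          # mandatory impact assessment + mitigations
    if total >= 3:
        return "limited"       # transparency obligations
    return "minimal"

# Hypothetical scoring of public-space biometric surveillance:
print(risk_tier({"product_design": 3, "impact_severity": 3,
                 "due_diligence_gaps": 2, "causal_link": 3,
                 "remedy_difficulty": 3}))  # -> "unacceptable"
```

The design choice worth noting is that the inputs are harms to people, not harms to markets, which is exactly the shift from a product-liability framing that the speaker is arguing for.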
Last thing, and then I'll stop here. Where AI systems are fundamentally incompatible with human rights, such as biometric surveillance deployed in public spaces, including facial and emotion recognition, we, along with a coalition, advocate for a ban of such systems. And we've seen a proliferation of such bans in laws, like in the U.S. for example at the state level, and right now in the latest version of the AI Act adopted by the European Parliament.
So that means prohibiting the use of facial recognition and technologies that enable mass surveillance in public and publicly accessible spaces by governments. And we urge the UN, amid other processes such as the Hiroshima process, to include such bans. Thank you, Ian.
>> IAN BARBER: Thank you. That was amazing. You actually just answered my immediate follow-up question, which is what is really needed when it comes to AI systems that do pose unacceptable risk to human rights. So thank you for responding, and I very much agree that having mandatory due diligence, including human rights impact assessments, is imperative. I think what you spoke to in terms of stakeholder engagement rings true, as well as the issue of transparency and the need for that to foster meaningful accountability and introduce remedies. So thank you very much for that overview.
Based on that, and considering there are so many initiatives and elements to consider, transparency, accountability or scope: Tara, given all this, what are your domestic priorities, and how do they relate to your regional and international engagement? If you could speak a bit to how these are all feeding into each other, that would be great. Thank you.
>> TARA DENHAM: Thank you. And thank you for inviting me to participate. I'm the Director General of the Office of Human Rights, Freedoms and Inclusion at Global Affairs Canada, which I think perhaps warrants a bit of an explanation, but actually aligns really well as a starting position.
Because it is within the Office of Human Rights, Freedoms and Inclusion that we've embedded the vision for ‑‑. And so that was our starting point, from which, since the integration of those policy positions and that policy work a number of years ago, we were always starting from a human rights perspective.
And so this goes back, I think, about six or seven years, when we actually created this office and integrated the human rights perspective into our digital policy from the beginning, including some of our initial positions on the development of AI considerations and the geopolitics of artificial intelligence.
So I think that in and of itself is perhaps unique among government structures. Having said that, I would also acknowledge that across a lot of government structures we are all trying to figure out how to approach this. But as the DG responsible for these files, it does give a great opportunity to integrate that human rights policy position from the beginning. When we were first starting to frame some of our AI thinking from a foreign policy lens, it was always from the human rights perspective. I can't say that has always meant we've known how to do it, but I'd say it's always been pushing us to think and challenge ourselves on how to use the existing frameworks and how we could advocate for that at every venue, including domestically.
So I want to give perhaps a snapshot framing of how we're approaching it in Canada: national perspectives, how we're linking that to the international, and of course how we address integrating diverse voices into that in a concrete way. I would say when we started talking about this a number of years ago, there was the debate, and I'm sure many of you participated in this debate. It was a lot around, you know: should it be legislation first? Should it be guiding principles? Are there frameworks? Are we going to do voluntary?
For a number of years that was the cycle. And I would say over the last year and a half to two years, that is not a debate anymore. We have to do all of them, and they are all going to be going at the same time. So now, where I'm standing, it is more about how we are going to integrate and feed off of each other as we're moving domestically at the same time as internationally.
Typically, from a policy perspective, you would have your national positions defined, and those would inform your international positions. Right now the world is just moving at an incredible pace, so we're doing both at the same time, and we have to find those intersections. But it also takes a conscious decision across government; and when I say across government, I mean across our national government.
And of course this is within the framing we're all familiar with: domestically, we're all aiming to harness AI to its greatest capacities because of all the benefits that there are, but we're always very aware of the risks. So that is a very real tension that we need to always be integrating into the policy discussions that we're having. And our belief and our position, in our national policy development and internationally, is that that is where the diversity of voices is absolutely required. Because the views of risk will be very different depending on the voice and community that you are inviting and actually engaging in a conversation in a meaningful way. So it is not just inviting them to the conversation; it is actually listening and then shaping your policy position.
So in Canada, what we've seen is ‑‑ I'm not going to go into great detail, but just to give you a snapshot of where we started. Within the last four years, we've had a directive on how automated decision making will be handled by the Government of Canada. And that was accompanied by an Algorithmic Impact Assessment tool. That was sort of the first wave of direction that we gave in terms of how the Government of Canada was going to engage with automated decision making.
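For readers unfamiliar with this kind of tool, the general shape is a scored questionnaire whose total maps to an impact level that then triggers obligations. The sketch below shows that idea only; the real Algorithmic Impact Assessment is a detailed public questionnaire, and the questions, weights and thresholds here are invented.

```python
# Sketch of the questionnaire-to-impact-level idea behind an algorithmic
# impact assessment. Questions, weights and thresholds are invented and
# do not reproduce the actual Government of Canada tool.

QUESTION_WEIGHTS = {
    "decides_benefits_eligibility": 4,   # system affects access to services
    "uses_sensitive_personal_data": 3,
    "no_human_review_of_decisions": 4,
    "affects_vulnerable_groups": 4,
    "outputs_are_explainable": -2,       # mitigations lower the score
    "regular_bias_testing": -2,
}

def impact_level(answers):
    """Return 1 (low impact) to 4 (very high) from yes/no answers."""
    score = sum(w for q, w in QUESTION_WEIGHTS.items() if answers.get(q))
    for cutoff, level in ((12, 4), (8, 3), (4, 2)):
        if score >= cutoff:
            return level
    return 1

# Hypothetical system: fully automated, data-hungry, affecting vulnerable groups.
print(impact_level({
    "decides_benefits_eligibility": True,
    "uses_sensitive_personal_data": True,
    "no_human_review_of_decisions": True,
    "affects_vulnerable_groups": True,
    "regular_bias_testing": True,
}))  # 4 + 3 + 4 + 4 - 2 = 13 -> level 4
```

Tying the level to escalating requirements (such as peer review or mandatory human intervention at higher levels) is what makes a questionnaire like this operational rather than advisory.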
Then over the last little while, in the last year, there's been a real push related to generative AI. So, I think just in the last couple of months, there was the release of a guide on how to use generative AI within the public sector. And a key point I wanted to note here is that it is a requirement to engage stakeholders before the Government of Canada deploys generative AI. Before we actually roll it out, we have to engage with those that will actually be impacted, whether it be for public use or service delivery.
And then just last month, a voluntary code of conduct on the responsible development and management of advanced generative AI systems. Again, we've seen the U.S. with similar announcements, we've seen the work that we're doing in the G7, and a lot of these codes of conduct and principles coming out at the same time.
And this is also accompanied in Canada by working through legislation: we have an AI and Data Act going through the legislative process. As I said, these are the basis of the regulations and the policy world that we're working in within Canada. And what I'd note there is that these are all developed by multiple departments.
Okay? So that is where I think we're challenging ourselves as policy makers, because we also have to increase our capability to work across the sectors, across the departments. And I would say, from where we started when we were developing Canada's directive on automated decision making through to the actual code of conduct that was just announced, that was moving from, you know, informal consultations across the country, trying to engage with the private sector and academia, to the voluntary code being formally consulted on.
We have national tables set up now, which include the private sector, Civil Society, federal, provincial and territorial governments, and indigenous communities. So we've also had to make a journey through what it means to move from ad hoc consultation to formalized consultation when we're actually developing these codes.
How does that translate internationally, as we're learning domestically at a rapid pace? Perhaps a few examples. I'll hearken back to the UNESCO Recommendation on the Ethics of AI, from 2021. This is where, again, it was about making that conscious decision to harness our national tables that were in place to define our negotiating positions when we would be going internationally, given that, again, our national positions weren't as defined.
And then we also wanted to leverage the existing international structures.
I think that is really important as we talk about the plethora of international structures. This is where we've used the Freedom Online Coalition. You have to look at the structures you have, the opportunities that exist, and the means by which we can do wide consultation on the negotiating positions that we're taking.
So for the UNESCO recommendations that's where we use the Freedom Online Coalition. And they have an advisory network which also includes Civil Society and tech companies. So again it is about proactively seeking those opportunities, shaping your negotiating positions in a conscious way. And then bringing those to the table.
We're also involved in the Council of Europe negotiations on AI and human rights, which is again leveraging our tables. But it is also advocating to have a more diverse representation of countries at the table. So you have to seize the opportunity.
We do see this as an opportunity to engage effectively in this negotiation. And we want to continue to advocate that more countries participate and that more stakeholder groups can engage.
So maybe I'll just finish by saying some of the lessons we've learned from this. It is really easy to recite that and make it sound like it was, you know, easy to do. It is not. Some of the lessons I would pull out.
Number one, stakeholder engagement requires a deliberate decision to integrate it from the start. And I guess the most important word in that is "deliberate". You have to think about it from the beginning. You have to put that in place. As I've said a few times, you have to think about and make sure you are creating the space for the voices to be heard, and then actually following through on that.
The second one: it does take time. It is complex. And there will be tensions. And there should be tensions, because if there are no tensions in the perspectives, then you probably haven't created a wide enough table with a diversity of voices.
So, and I think my team is probably tired of hearing this, but you have to get comfortable with living in a zone of discomfort. If you are not in a zone of discomfort, you are probably not pushing your own policy views, and I'm coming from a policy perspective. And you have to do that to find the best solutions.
As policy makers, it is also going to drive us to increase our expertise. Yes, we would traditionally come to the table with policy knowledge and human rights experience and those elements. But we've tried a lot of different things in terms of integrating expertise into our teams and into our consultations. So you have to think about what it is going to mean in a policy world to now do this.
And finally, I'll just say again: leveraging the structures that are in place. We have to optimise what we have. It is, I think, sometimes easier to say, well, it is broken, let's create something new. But I do want to think that we can continue to optimise. And if we're going to create something new, it should be a conscious decision, thinking about what is missing from what we have that needs to be improved on.
I'll stop there.
>> IAN BARBER: Thank you Tara. Great and comprehensive. In the beginning you alluded to the challenges of applying the human rights system to the work you are doing, but I think Canada is very much doing that, taking a multi-pronged approach that puts human rights front and centre at both international and national levels. I really agree that there is very much a need for deliberate stakeholder engagement, and I appreciate the work you are doing on that, and also the need to leverage existing structures, ensuring that these conversations are truly global and inclusive and that the expertise is there as well. So thank you so much.
And I think your comments on UNESCO actually serve as a perfect segue to my next prompt, which I'll be turning to Pratek to discuss.
So UNESCO developed the Recommendation on the Ethics of AI a couple of years ago. I think it's been alluded to that the conversation has kind of gone from "do we need voluntary, self-regulatory or non-binding things" to "do we perhaps need something more binding", and I think that is perhaps the direction of travel now. But I'm curious to hear more about your experience at UNESCO in terms of implementing the recommendation at this point, and how UNESCO in general will be playing a larger role in AI governance and human rights moving forward. Thank you.
>> PRATEK SIBAL: Thanks Ian. How much time do I have?
>> IAN BARBER: You have 5 to 6 minutes but there is no rush. I want to hear your comments and intervention.
>> PRATEK SIBAL: First of all, thanks for organising this discussion on human rights‑based approaches to AI governance.
I will perhaps focus more on the implementation part and share some really concrete examples of the work that we are doing with both rights holders and duty bearers. Perhaps it is good to mention that the Recommendation on the Ethics of AI is human rights based: it has human rights as a core value, and it is really informed by human rights.
Now, I will focus on the judiciary first. While we are talking about the development of voluntary frameworks, binding and non-binding instruments and so on, there is a separate discussion about whether it is even possible, in this fractured world we're living in, to have a binding instrument. It is very difficult; it is not a given. If you were going to go and negotiate something today, it would be very difficult to get a global view.
So we have a recommendation which is adopted by 193 countries. That is an excellent place to start. And I'm really looking forward to the work that colleagues at the Council of Europe are doing at a regional level, and they also work with other countries.
So in my team, in my files, we also started looking at the judiciary, because you can already start working with duty bearers and implement international human rights law through their decisions. But the challenge you face is that a lot of the time they don't have enough awareness of what AI is and how it works. There is a lot of myth involved.
And there is also this assumption about the technology out there: in a lot of countries where they are using an AI system for (?), people think, oh yeah, the computer which is giving this output, it must be right.
So all of these kinds of things need to be broken down and explained, and then the relevant links with international human rights law need to be established. This is what we started to do sometime around 2020. We at UNESCO have an initiative called the Global Judges Initiative, started in 2013, where we were working on freedom of expression, access to information and the safety of journalists. Through this work we reached about 35,000 judicial operators in 60 countries, through everything from online trainings and massive open online courses to in-person training, to helping national judicial training institutions develop curricula.
We started to discuss artificial intelligence. The recommendation was under development, and we were already thinking about how we could actually implement it beyond the great agreement that we have amongst countries. And we first launched a survey to this network.
About 1,200 judicial operators, so judges, lawyers, prosecutors, people working in legal administration, responded to this. First we wanted to learn how AI can be used within the judicial processes, in the administrative processes. Because in a lot of countries they are overworked and understaffed; talking to judges, they say, yeah, if I take a holiday my colleagues have to work like 16 hours a day.
And that is a key driver for them to look at how the work can be streamlined.
The next aspect is really about the legal and human rights implications of AI when it comes to, say, freedom of expression, safety of journalists, access to information. Let me give you some examples here. For instance, in Brazil there is a case in the Sao Paulo metro system. They were using a facial recognition system on their doors to detect your emotions and then show advertisements.
And I think it was the data protection authority in Brazil that said you can't do that: you have no permission to collect this data, and so on. And this did not really require an AI framework.
So my point is that we should not think in just one direction, that we have to work on a framework first and then implement human rights. We already have international human rights law, which is part of the jurisprudence in a lot of countries and which can actually be used directly. So let's not give people a reason to wait, saying, let's first have a regulation in our country.
Let me give you some other examples.
We've seen in Italy, for instance, they have these food delivery apps. And they had two cases there where basically one of those apps ‑‑ I don't remember which one ‑‑ was penalizing the food delivery drivers if they were cancelling their scheduled deliveries for whatever reason and they were giving them a negative score.
So it was found to be biased, rating those who cancelled more negatively vis‑a‑vis the others. And the data protection authority basically said, based on the GDPR, that you cannot have this.
We had the case Marlena was mentioning about facial recognition in the public sphere. I think it was in the UK, South Wales, where the police department was using facial recognition systems in the public sphere. This went to the Court of Appeal, and they said, oh, you can't do this.
So these are just examples of what is already happening, and of how people have already applied international human rights standards and so on.
Now what are we doing next?
In our programme of work with the judiciary, we launched in 2022 a massive open online course on AI and the rule of law, which covers all these dimensions, and we made it available in seven languages. It was a kind of participative dialogue. We had the president of the Inter-American Court of Human Rights, the Chief Justice of India, professors. We had people from Civil Society coming and sharing their experiences from different parts of the world, because everyone wants to learn in this domain. As Canada was mentioning, there is a lot of scope to learn from practises in other countries.
So that was a first product, which reached about 4,500 judicial operators in 138 countries.
Now, we realise that doing individual capacity building is one thing, but we need to focus more on institutional capacity building, because that is more sustainable in the long term. So, with the support of the European Commission, we've now developed a global tool kit on AI and the rule of law, which is essentially a curriculum, which has (?) talking about the human rights impact assessments that Marlena was talking about before. We are actually going to go to the judiciary and say, okay, this is how you can break things down; this is how you look at data.
What is the quality of the data? When you are using an AI system, how do you check what the algorithm is doing? What was the data used, et cetera. So we are breaking these things down practically for them to start questioning, at least. You don't expect judges to become AI experts at all, but at least to have that mindset to say: oh, it is a computer, but it is not infallible. We need to create that.
So we have this curriculum, which we developed through an almost year-long process of reviews and so on. We have the pilot tool kit available, which we are implementing first with the (?) of human rights next month, and we will work with the community on what works for them, also from the trainers' side.
We are hopefully going to do it for the EU. We are going to do it in East Africa with the East African Court of Justice next year; in fact, we're organising a conference with them later this month in Kigali.
So we are at this moment piloting this work, organising these national and regional trainings with the judiciary, and then as a next step hoping that this curriculum is picked up by the national judicial training institutions. Then they own it, they shape it, they use it. And that is how we see international human rights standards percolate down into enhanced capacities through this kind of programme. And also, as an open invitation: the tool kit that we have, we are just piloting it, so we are also open to feedback from the human rights experts here on how we could further improve and strengthen it.
So perhaps I'll briefly just mention the rights holder side. We've also developed some tools for basically youth, or even the general public, you could say, to engage them in a more interesting way. We have a comic strip on AI, which is now available in English, French, Spanish and Swahili, and I think a language of Madagascar as well, and in German and Slovenian.
These are tools that we make available to the communities to then also co-own and develop their own language versions, because part of strengthening human rights globally is also making that content available in different languages.
So people can associate with it better.
We have a course on defending human rights in the age of AI, which is available in 25 languages. It is a microlearning course on a mobile phone, developed in a very collaborative way with UNITAR, which is a United Nations training and research institution, as well as a European project which involves youth volunteers who want to take it to their communities and say, actually, in our country we want this to be shared, and so on.
So there are a number of tools that we have, and communities of practise with whom we work on capacity building, actually translating some of these high-level principles, frameworks and policies, hopefully a few years down the line, into judgements which become binding on governments and companies and so on.
I'll stop here. Thank you.
>> IAN BARBER: Thank you. That's great, and thank you for reminding us that we already have a lot of frameworks and tools that can be leveraged and are taking root in domestic contexts as well. I commend your work on AI and human rights and the judiciary. I think it is important to consider that we do need to work on the kind of institutional knowledge and capacity that you were speaking to, and also to work with the various stakeholders in an inclusive manner. So thank you.
At this point we've heard from Marlena about what is truly needed from a human rights‑based approach to AI governance; from Tara about what governments and states like Canada are doing to champion this approach at domestic and international levels; and from Pratek about the complementary implementation work being done by international organisations.
So I want to pause at this point to see if anyone on the panel has any immediate reactions. Feel free to jump in. If not, that's okay too.
From online? If not...
So yeah, we can also go to a brief question if that's possible. Please feel free to jump in. I think there is a microphone there, but we can also hand one over.
If you could introduce yourself, that would be great too. Thank you.
>> AUDIENCE: I'm Stephen Foster, policy specialist at UNICEF. And it is really great to hear about the different initiatives that are happening and the different approaches.
And maybe it is natural; like in the previous session, Thomas Snyder was saying it is natural that we will see many different governments and countries approaching this differently, because nobody really knows how to do this.
So this is more kind of, I guess, a request: to think about not just the "what" of governance, but also the "how", and to do analysis of these different approaches and see what works, from voluntary codes of conduct to industry‑specific legislation.
And I think that is almost really the next phase as we go from policy to practise. This will play out over a number of years, but it would really be helpful for the UNESCOs and the OECDs to already start building up this knowledge base.
But clearly some things will work well and some things not. We did a policy for AI for children and engaged children in the process and it was a meaningful and necessary process that really, you know, informed and enriched the product. So it is really, you know, encouraging to hear about the multistakeholder approach that is ongoing, not just ad hoc.
But yeah, that is kind of a request, and perhaps you have thoughts on how these approaches may play out if we look ahead, and what kind of role the organisations you are in might play: not just documenting and looking at what may be governed, but actually how.
Thank you.
>> TARA DENHAM: On your comment about needing to do the analysis of what is working and what is not: again, this is where we need to also build that capacity globally. It is one thing for Canada to do an analysis of maybe what is working in Canada, but we have to really understand, you know, what are the risks? How is it impacting different communities, in different countries?
But we have the International Development Research Centre, IDRC, and they do a lot of the funding and capacity building in different nodes around the world, specifically on AI capacity building and research. So that is where we've also had to really, you know, link up, so that we could be leveraging as fast as possible the research that they are also supporting.
So again, it is about challenging ourselves as policy makers: we have to keep seeking it out. But there is that research, and we just need more of it. I just wanted to advocate for that. Thank you.
>> MARLENA WISNIAK: Thank you for the question. I definitely support multistakeholderism and engaging stakeholders in the process of policy making itself. One challenge we see a lot is that there is no level playing field between the different stakeholders. I don't know if there are many companies in the room, but we often see, you know, companies have a disproportionate advantage, I'd say, financially and in access to policy makers. When I mentioned at the beginning of my intervention that there is a handful of human rights folks that participate in AI governance, it really is an understatement compared to the hundreds, actually thousands, of folks in the public policy sections of companies. So that is something that I would urge international organisations and policy makers at the national level to consider. For Civil Society it is an uphill battle in terms of capacity, resources and finances, and obviously marginalized groups and global majority‑based orgs are disproportionately hit by that.
And Canada, as the Canadian government, I imagine you primarily engage with national stakeholders, which is obviously important. I also encourage you to think about how Canadian laws can influence, for example, global majority‑based regulation. That is something we think about a lot, in fact, with the EU and the so-called "Brussels effect", understanding that many countries around the world, especially those with more repressive regimes or authoritarian practises, do not have the same safeguards as, for example, the EU would have. So add nuance to stakeholder engagement, in a way that includes meaningful participation and inclusion of all.
Thank you.
>> PRATEK SIBAL: First, on Canada: I think they are doing a fantastic job in Latin America and ‑‑ with the AI for Development project, and since 2019 I have seen communities come up and be supported to develop, say, language datasets, which can then lead to the development of applications in health care or agriculture, or just to strengthen in a more sustained way the capacities of Civil Society organisations that can inform decision making and policy making. And we at UNESCO have also benefited from this.
With the Recommendation on the Ethics of AI, which is being implemented now in a lot of countries, we work in a multistakeholder manner, right? We generally have a national multistakeholder group which convenes and works, and the capacity of Civil Society organisations to actually analyse the national context and contribute to these discussions is very important. So the work that Canada or IDRC and so on are doing: I have, over the past four or five years, seen the results of that in my work already. So there's good credit due there.
On your point about policy making at the international level, and recommendations and so on: I think the process of international standard setting and policy making has evolved over the years. We used to be in a mode of technical assistance, many years ago, where someone would go to a country and help them develop a policy. An expert would fly in, stay there for some months and work.
I think that model is changing, in the sense that you are developing policies or frameworks at the global level, with the stakeholders from the national or whatever level involved in the development of these frameworks.
So what happens is that when they have been involved in developing something at the global level and they have to translate it at the national level, they naturally go towards this framework on which they have worked and of which they have great knowledge. It is an implicit way of policy development, and it has actually been the model since the early 2000s, because otherwise there is not enough funding available, and it is also not sustainable when global frameworks are not developed in a more inclusive manner.
That creates ownership of these frameworks, which then become the go-to tools at the national level as well.
So that is an interesting way to develop, and that is why we talk about multistakeholderism. A lot of times in fora like this, multistakeholderism just becomes a buzzword: yes, we should have everyone at the table. That is not all it means. We have actually produced guidance on how to develop AI policies in a multistakeholder manner along the policy cycle, from agenda‑setting to drafting to implementation and monitoring. And there is a short video as well, which I'm happy to share later.
>> IAN BARBER: Thank you very much. I know we have one speaker. Just really quickly, if you can make your question brief: I have three more interventions from people, including online, so maybe they can consider your question in their responses. If not, we can come back to it at the end. I just want to make sure we make time for them. So if you can be brief, it would be very much appreciated.
>> AUDIENCE: So I can ask a question? Thank you so much. I'm working on engaging tech for internet freedoms in Asian countries: Myanmar, Vietnam, China. And my question, actually, I think it is more for UNESCO and Canada at some point, because, I mean, they are the ones providing some global policies.
Would you recommend some mechanisms which we could implement in authoritarian-regime countries to monitor responsible AI, especially from the private sector side?
Because in the Western world, or the world which is more human rights friendly, it is easier to implement those policies than in authoritarian countries. Thank you.
>> IAN BARBER: Thank you. We'll be coming to those questions as well. And I think it is a good segue to the next intervention: Shahla, if you are connected with us online. My question for you: aside from government and multilateral efforts, it is obviously clear the private sector plays a key role in promoting human rights in AI governance frameworks. So if you could speak about your work at Google, what its perspective is and how your work promotes human rights, and if you can speak to the question that's been asked, fantastic. Thank you for joining, and for your patience.
>> NAIMI SHAHLA: Sure. Thank you for having me today, and apologies that I was unable to join in person. I'll try to keep this brief; I want to make sure we get to a more dynamic set of questions. But to take a step back: at Google, the human rights programme is a central function responsible for ensuring we're upholding our human rights commitments, and I can share more on that later. It really applies across all the company's products and services, across all regions. This includes overseeing the strategy on human rights; advising product teams on potential and actual human rights impacts; and, quite relevant to this discussion, conducting human rights due diligence and engaging external experts, rights holders, stakeholders, et cetera.
So maybe just to take a brief step back, I'll share a little bit of our starting point as a company: true excitement about the ways AI can advance rights and create opportunities for people across the globe. And I think that doesn't just mean potential advancements, but really progress that we're all already seeing: putting more information in the hands of human rights defenders in whatever country they are in; keeping people safer from floods and fires particularly, knowing that these disproportionately affect the global majority.
Increasing access to health care. One that I'm particularly excited about is something we call our 1,000 Languages Initiative, which is really working on building AI models that support the 1,000 most widely spoken languages. We obviously live in a world where there are over 7,000 languages, so I think it is a drop in the bucket, but we hope that it is a useful starting point.
But to turn again to our topic at hand: none of this is possible if AI is not developed responsibly. As noted in the introduction, this really is an effort that necessarily needs to have governments, Civil Society organisations and the private sector involved in a really deeply collaborative process, maybe one we haven't even seen before.
For us as a company, the starting point for responsible AI development and deployment is human rights. For those maybe less familiar with the work we do in this space, Google has made a number of commitments to respecting the rights enshrined in the Universal Declaration of Human Rights, which is 75 this year, its implementing treaties, and the UN Guiding Principles on Business and Human Rights.
What does that actually look like in practise? As part of this, years ago in 2018 when we established our AI Principles, we embedded human rights in them. The AI Principles describe our objectives to develop technology responsibly, but they also outline specific application areas we will not pursue, which includes certain technologies. To provide a bit of a tangible example: when thinking of developing a new product, like (?) which we released earlier this year, as part of that process my team also conducts human rights due diligence to identify any potential harms, and develops, alongside various legal teams in particular, appropriate mitigations around them. One example of this which we can share, a public case study we released, is the Celebrity Recognition API. In 2019 we already saw that the streaming era brought a remarkable new explosion of video content. In many ways that was fantastic: more documentaries, more access for film makers to showcase and share work globally. But also a big challenge: video was pretty much unsearchable without extensive tagging processes, which made things really difficult and expensive for creators. So discussion popped up about better image and video capabilities to recognize an international roster of celebrities as a starting point.
So our AI Principles review in this process triggered additional human rights due diligence, and we brought on Business for Social Responsibility, BSR, which some are familiar with, to help us conduct a formal human rights assessment of the potential impact of a tool like this on human rights. Fast forward: the outcome was a very tightly scoped offering, one that defined "celebrity" quite carefully, established manual customer review processes, and instituted expanded terms of service, also later informing a companywide stance on facial recognition. And it took into consideration quite a bit of stakeholder engagement in the process, developed more recently than this particular human rights assessment. I'll also plug ‑‑.
I share this example for two reasons. One, human rights and established ways of assessing impacts on human rights have been embedded into our internal AI governance processes from the beginning. And two, as a result of that, we've actually been doing human rights due diligence on AI‑related products and features for years. That's been a priority for us as a company for quite a long time.
To take a very brief note on the second part of your question, I'll just flag that I think we really do need everybody at the table, and that is not always the case right now, as others have mentioned. Just as an example, we were excited to be part of the moment at the U.S. White House over the summer that brought industry together to commit to responsible practises in the development of AI. And earlier this fall we released our company's progress against those commitments. That included launching a beta of a new tool for watermarking and identifying AI‑generated images. A really core component informing the development of that product was concerns from Civil Society organisations, academics and individuals in the global majority, keeping in mind that we have 75 elections happening globally next year, really concerns around misinformation and its proliferation.
Establishing a dedicated AI red team. Co‑establishing the Frontier Model Forum to develop standards and benchmarks for emerging safety issues. We think these commitments, and companies' progress against them, are an important step in the ecosystem of governance, but they really are just a step.
So we're particularly eager to see more space for industry to come together with governments and Civil Society organisations, more conversations like this. I think Tara mentioned the Freedom Online Coalition. This could be through existing spaces like the FOC or the Global Network Initiative, but also potentially new spaces as we find necessary.
I'll just mention one last thing briefly, as I'm probably over my time, because it did come up. When thinking about AI regulation, at Google we think about it in a few ways, something we call the four Ss. The structure of the regulation: is it international, is it domestic, is it vertical, is it horizontal? The scope of the regulation: how is AI being defined, which is not the easiest thing to do in the world. The subjects of the regulation: developers, deployers. And finally the standards of the regulation: what risks, and how do we consider those difficult tradeoffs that were mentioned earlier, I think by the first question.
So these are just some of the things that we're taking into consideration in the process. But we're really hoping that more multistakeholder conversations will lead to some coordination on this front. Because our concern is that otherwise we'll have a bit of a hodgepodge of regulation around the world, and in the worst case scenario, I think it makes it difficult for companies to comply, stifles innovation, and potentially cuts off populations from what could be potentially transformative technology.
It might not be so much the case for us at Google, where we have the resources to make significant investments in compliance and regional expertise, but we do think it could be a potential issue for smaller players and future players in the space.
So I'll pause there. Because I think I probably took up too much time. But I appreciate it. And looking forward to the Q&A.
>> IAN BARBER: Thank you for that overview. It was great and thank you for highlighting the work that is happening at Google to support human rights in this context. Particularly your work on due diligence for example. As well as you noting the need for collaboration and considering global majority perspectives. I think that is key as well.
What I'd like to do now is turn to our second-to-last speaker, and then a couple of questions at the end. We've heard from a couple of stakeholders at this point. But I think the question for you is: do you think that the global majority is able to engage in these processes? Do you think they are able to effectively shape the conversations that are happening at this point?
I spoke earlier about the need to consider local perspectives, and I'm curious to hear from you why that is so critical, and about the work you are doing now. If we could keep it to four or five minutes that would be fine, but I don't want to cut you off. Thank you.
>> Thank you for the question. I'm from a Latin American digital rights organisation, and for the last couple of years we've been researching AI in the region in the context of public policy. Part of that work has been funded by ‑‑ so thank you.
I can tell you more about that later if you are interested. You can go to IA.(?), or come to me and I'll give you one of these so you can find it more easily.
Regarding your question: even though there are interesting efforts being developed right now, I think Latin America has mostly lacked the ability to meaningfully engage in and shape processes for responsible AI governance. And this is a consequence of difficult challenges faced by the Latin American region in the local, the regional and the global context.
For example, in the local context, one of the main challenges has to do with designing governance instances that are inclusive and that can engage meaningfully with a wide range of actors. This is at least partly a consequence of a long history of authoritarianism that results in (?) suspicion of participation, suspicion of human rights impacts, or a lack of the necessary institutional capacities to implement solutions that (?) inclusive, transparent participation.
In the global context, we have to address the eagerness of the tech industry to aggressively push a technology that is still not completely mature, in terms of understanding it, how we think about its limitations and how we deal with it. One of the consequences of this is the proliferation of different proposals for guidance: legal, ethical and moral.
So many that it is hard to keep up, so there is a sense of overwhelm and (?), which is a difficulty in itself. Also in the global context, I think Latin American and global majority perspectives are often overlooked in the international debate about technology governance, probably because, from a technical or engineering standpoint, the number of artificial intelligence systems being developed in Latin America might seem marginal. Which is true, especially when compared to those created in North America, Europe and parts of Asia.
But a better understanding of the global majority's, and Latin America's, relationship with AI can be illuminating, not just for Latin America but for AI governance as a whole.
How should it look and what should it include?
First, I think it is important to consider the different roles of countries, and in particular Latin American countries, in the global chain of artificial intelligence development: providing ‑‑ non-renewable energy, and suffering important environmental impacts, including air pollution and water contamination that can lead to the destruction of habitats and loss of biodiversity.
There are also severe impacts on the health of miners, many working in precarious conditions. The region also provides raw data, collected from different sources by different means, used to train and refine AI models. That data is often collected as a consequence of a lack of proper protection of people's information, most of the time without people's consent or knowledge.
The region also provides the labour necessary to train AI systems by labelling data for machine learning: usually low-paid jobs performed under precarious conditions that can have harmful impacts on people's emotional and mental health, for example when reviewing data for certain purposes.
This labour is also the foundation of any AI system, yet it is underestimated and not properly compensated. In summary, Latin America provides the material resources necessary for the development of AI systems that are designed somewhere else and later sold back to us and deployed in our countries, perpetuating dependency and extractivism. We are both providers of the input and paying clients for the output, but the processes that determine AI governance are often far removed from (?)
Responsible AI governance needs to consider the impact on human rights, including those linked to the extraction of these material resources: environmental human rights, workers' rights and the rights to data protection, privacy and autonomy, all of which are greatly impacted in regions like Latin America.
We have been looking into different implementations of AI systems through public policy. The main way most people interact with these technologies is through the state, even if they are not always aware of it. It seems that states are using AI to mediate their relationship with citizens: for surveillance purposes, for making decisions regarding welfare assistance, and for controlling access to and use of welfare programmes. However, our research shows that most of the time these technologies are deployed without meeting transparency or participation standards. They lack human rights approaches and do not consider open, transparent and participatory evaluation processes.
There are many reasons for this, from corruption to lack of capacity, and a disregard for human rights impacts, as I mentioned earlier. But we need to address the economic reality and strengthen democratic institutions. International cooperation is key, and regions can play a major role in promoting change.
I'll leave it here for now.
Thank you.
>> IAN BARBER: Thank you for speaking about the need for regional perspectives and highlighting how these need to feed into global conversations and ‑‑. Really helpful.
I'm going to turn to our last speaker now, Oleseyi, who is joining us at around 5:00 a.m. and definitely deserves a round of applause. The question builds on the previous comments: how do we ensure that African voices are similarly represented in efforts on responsible AI governance and the promotion of human rights? And I'm going to weave in a question from online as well, which might be related, if you are able to respond to that too.
What suggestions can be given to African countries as they prepare strategies or policies on emerging technologies such as AI, specifically considering risks and benefits?
Thank you so much for your patience and thank you for being with us. Cheers.
>> OLESEYI OYEBISI: Thank you. And thank you for inviting me to speak this morning.
I think in terms of African voices, we would all agree that the African region is coming late to the party, and we now need to find a way of peer-pressuring the continent to get into the debate.
Doing this would mean that we are also doing ourselves, and other regions, a favour, understanding that the continent has a very large population, and that human rights abuses on the continent would snowball into developmental challenges that we do not want. So this is the context for ensuring we're not leaving the African continent behind, and it speaks to the question that was asked by our colleague.
Our governments have not prioritised the governance of AI. Of course we need to think of the governance of AI within the hard and the (?) law, but also understand the life cycle of AI itself. How do we ensure that, along the whole life cycle, we have a government that understands, Civil Society organisations that understand, and businesses that understand? It was great listening to the colleague from Google talking about how Google has a human rights programme. How do we then, within a more multistakeholder approach, bring that understanding to anticipate some of the rights challenges we might see with artificial intelligence, but also plan (?) our approach to be able to mitigate those?
And this is where governments would see Civil Society organisations not as enemies but as allies, helping to bring those voices together. Of course, we should understand that at some point the politics of AI will also come to bear, because on the continent itself we do not have all of the resources in terms of intellectual capacity to develop the code and all of the algorithms that follow from that. Our investors are not prepared for that yet. But in dealing with the technicalities, we also have to build some level of competence.
We must also understand that, in terms of the international governance of AI and the setting up of international bodies, the African region would have to ensure that our missions, especially those relating to the UN, have the right capacity to take part in the negotiations. That is why I like how the colleague from Canada said we will have these contestations, and they are very necessary, because it is within them that we'll be able to bring a diversity of opinions to the table, such that we have policies that can help us address some of the challenges we may see now and in the future.
How are we going to prepare ourselves as Africans to be able to negotiate, and negotiate better?
And this speaks to the role of the African Union. I do think the European Union is also setting the agenda, and the kind of model for Africa and other regions to follow, in terms of the deep dive it has done with the AI treaty, and how it is using that to help shape how we can have a good human rights approach to AI itself.
So now, answering directly the question that you posed to me: whatever advice we would give to African governments would also be within the ‑‑ (?). One is to understand that hard laws may not necessarily be the starting point for African governments; it might be soft laws, working with technology platforms to look at codes of conduct and using lessons from that to progress to laws. Also, our governments must begin to think about regulation in ways that balance the needs of citizens against some of the disadvantages that you do not see, bringing citizens themselves into the conversation, such that as much as we are encouraging innovation, we are also ensuring the rights of others are not abused.
It is going to be a long walk to freedom. However, that journey will start with Africans: African Civil Society, African businesses and others investing in the right set of meetings, the right set of research and the right set of engagements that can get us to be part of the global conversation, while also understanding that the regional elements of these conversations must be taken on board.
Especially given the fact that human rights abuses across the region are becoming alarming, and that we now have more governments that are interested in not opening up the space, not being inclusive; rather, they want to muffle voices. They also are not opening (?). A look at the civic space ranking for the region itself gives a picture of how, someway, somehow, these conversations might not necessarily be something that would excite the region, but again, this is an assumption. We can still begin to look for (?) and bring African governments to the table, in ways that help them see the need for this, and the need for us to get our voices onto global platforms.
>> IAN BARBER: Thank you, Oleseyi, for highlighting the need for Civil Society and government to work together, and for bringing a diversity of perspectives, African voices and governments to the table, which requires preparation as well.
So thank you.
A question to the organisers at the IGF: I'm not sure what the timing is, in terms of whether we'll be kicked out of the room or not, or whether there is a session immediately afterwards. I'm not entirely certain, but I don't see anyone cutting me off, and I think it is a lunch break. I'll just make some brief final comments, and if anyone has any particular questions or wants to come up to the speakers, that might be a more helpful way of moving forward. I don't want to stand between people and their food.
>> PRATEK SIBAL: There was a question from ‑‑ about how tricky it is when dealing with authoritarian regimes that put in place frameworks used in whatever way possible. I have no answer, but I think it is an important question, so we should give some time to that.
>> IAN BARBER: Thank you. I just want to say that we began this session with the crucial acknowledgement that there are truly glaring gaps in the existing discourse between human rights and AI governance, and that it is really key for all stakeholders to bring in global perspectives on these issues: from industry, Civil Society, governments and others. We've just begun to shine a spotlight, and we have journeyed through what is really needed in terms of a human rights approach to AI governance. It is one piece of the pie, but a critical one, and it is key that we continue to firmly root all efforts on AI governance in the international human rights framework.
Thank you so much to the speakers in person here, thank you for your patience, and apologies for going over. Apologies as well for not being able to answer all the questions, but I encourage you to come up and speak with the speakers yourself. Thank you.
(End session.)