Session
Organizer 1: Paola Galvez, IdonIA Lab
Organizer 2: Ananda Gautam, Youth IGF Nepal
Organizer 3: Matilda Mashauri, University of Dar es Salaam
Organizer 4: Aaron Promise Mbah, Tlit Innovation Lab
Speaker 1: Paola Galvez, Civil Society, Latin American and Caribbean Group (GRULAC)
Speaker 2: Yonah Welker, Civil Society, Eastern European Group
Speaker 3: Abeer Alsumait, Government, Asia-Pacific Group
Speaker 4: Monica Lopez, Private Sector, Western European and Others Group (WEOG)
Ananda Gautam, Civil Society, Asia-Pacific Group
Matilda Mashauri, Government, African Group
Aaron Promise Mbah, Private Sector, African Group
Roundtable
Duration (minutes): 90
Format description: A roundtable format encourages active participation and dialogue among speakers and participants. The setting fosters a collaborative exchange of ideas, and 90 minutes is enough time to cover the key topics and engage in substantive discussion while allowing flexibility to adapt to the flow of conversation and to address emerging issues as they arise. It balances a 50-minute panel discussion with an interactive 40-minute Q&A session.
Policy questions:
1. How can we ensure algorithmic decision-making processes are transparent and accountable, particularly in relation to their impact on marginalized communities?
2. What measures can be implemented to foster the development and deployment of disability-centered algorithms that prioritize accessibility and inclusion for persons with disabilities?
3. In what ways can stakeholders collaborate to address biases and discrimination embedded within algorithms, while promoting diversity and equity in digital spaces?
What will participants gain from attending this session? Participants will gain a deep understanding of how algorithms impact human rights and inclusion in the digital age, learning about the ways in which algorithmic decision-making processes can perpetuate social exclusion, discrimination, and inequalities. They will become aware of the risks associated with algorithmic bias and exclusion and the importance of addressing these issues to uphold human rights and create an equitable digital environment. Participants will learn best practices for promoting algorithmic transparency, accountability, and inclusivity, as well as approaches for designing algorithms that prioritize human rights and equity. Additionally, they will have networking opportunities to connect with colleagues who share an interest in advancing human rights and inclusion in the digital age. Ultimately, participants will come away with actionable recommendations for promoting digital inclusion and addressing algorithmic bias in policy development, industry practices, and civil society initiatives.
Description:
The session addresses the problem of algorithmic bias and exclusion and its impact on human rights in the digital age. As algorithms play an increasingly pivotal role in shaping many aspects of our lives, from employment opportunities to access to information, there is growing concern that they may perpetuate and exacerbate social inequalities and discrimination.

At the heart of this issue lies algorithmic bias: the systematic and unfair treatment of certain groups or individuals based on race, gender, socioeconomic status, disability, or other protected characteristics. Algorithmic bias can manifest in many forms, including unequal access to healthcare or financial services, disparities in search engine results, and discriminatory targeting in advertising. Compounding the problem, many algorithms operate as "black boxes" whose inner workings are hidden from scrutiny, making it difficult to identify and rectify instances of bias or exclusion. Without transparency, individuals affected by algorithmic decisions may be left without recourse or any understanding of why they were treated unfairly.

In this context, the session takes a human-centered approach to algorithmic development and deployment, one that prioritizes human rights, equity, and inclusion. It argues that by placing human rights at the centre of algorithmic design and implementation, it is possible to mitigate the risks of bias and discrimination and to ensure that algorithms serve the needs of all individuals, regardless of their background or circumstances. Drawing on the speakers' extensive experience, the session explores concrete strategies and best practices for advancing human rights and inclusion through algorithmic transparency, accountability, and inclusivity, fostering dialogue among stakeholders from diverse backgrounds and generating actionable recommendations for advancing human-centered algorithms and promoting digital equity and inclusion.
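To make the notion of an algorithmic bias audit concrete, the minimal Python sketch below shows one common transparency check: comparing favorable-outcome rates across groups and flagging a disparate-impact ratio below 0.8 (the "four-fifths rule" used in some regulatory contexts). The group names, decision data, and threshold here are purely illustrative assumptions, not part of the session materials or any speaker's methodology.

# Illustrative sketch only: a minimal disparate-impact check over a
# hypothetical set of binary decisions (1 = favorable, 0 = unfavorable).
# All data below is fabricated for demonstration purposes.

from collections import defaultdict

def selection_rates(decisions):
    """Compute the favorable-outcome rate per group from (group, outcome) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's.
    Values below 0.8 are a common red flag (the 'four-fifths rule')."""
    return rates[protected] / rates[reference]

# Hypothetical audit of ten loan decisions across two groups.
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 1), ("group_a", 0),
    ("group_a", 1), ("group_b", 1), ("group_b", 0), ("group_b", 0),
    ("group_b", 0), ("group_b", 0),
]
rates = selection_rates(decisions)
print(rates)                                             # {'group_a': 0.8, 'group_b': 0.2}
print(disparate_impact_ratio(rates, "group_b", "group_a"))  # 0.25 -> flagged

Simple outcome-rate comparisons like this are only a first step; they cannot explain why a black-box model treats groups differently, which is precisely the transparency and accountability gap the session examines.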
Expected outcomes:
1. Increased awareness among stakeholders of the ethical and social implications of algorithmic decision-making.
2. Identification of concrete strategies and best practices for promoting human rights and inclusion through algorithmic transparency and accountability.
3. Development of actionable recommendations for designing and implementing disability-centered algorithms to enhance digital accessibility and inclusion.
4. Establishment of a network of stakeholders committed to advancing human-centered algorithms and promoting digital equity and inclusion.
5. Creation of a roadmap for ongoing collaboration and dialogue to address emerging challenges and opportunities in algorithmic governance.
Hybrid Format: To ensure seamless interaction between onsite and online participants, we will combine technology with facilitation techniques. The onsite moderator will ensure that onsite and online participants have equal opportunities to contribute and ask questions. Live polling, facilitated by our online moderator, will encourage active participation from both audiences: online attendees can submit questions and respond to polls in real time, while onsite participants can engage with the same interactive elements on their mobile devices. Leveraging the speakers' active social media presence (e.g., Yonah Welker has 28K followers on LinkedIn), we will create a session-specific hashtag such as #AI4HRATIGF so that participants can share insights and connect with one another before, during, and after the session.