Session
Organizer 1: Nidhi Singh, 🔒
Organizer 2: Tejaswita Kharel, Centre for Communication Governance at National Law University Delhi
Organizer 3: Joanne D'Cunha, Centre for Communication Governance
Speaker 1: Jason Grant Allen, Civil Society, Asia-Pacific Group
Speaker 2: Tejaswita Kharel, Civil Society, Asia-Pacific Group
Speaker 3: Yik Chan Chin, Civil Society, Asia-Pacific Group
Nidhi Singh, Civil Society, Asia-Pacific Group
Tejaswita Kharel, Civil Society, Asia-Pacific Group
Srija Naskar, Civil Society, Asia-Pacific Group
Roundtable
Duration (minutes): 60
Format description: The workshop-based roundtable layout would be ideal for our session as it would foster collaborative discussion. This format allows for interaction and collaboration between participants, enabling diverse perspectives to be shared and explored effectively. We plan for the session to be informative and collaborative. The session will begin with a 5-minute introduction, followed by two 10-minute sections on (i) an overview of fairness in AI, and (ii) fairness in Singapore and India. This will be followed by a 15-minute simulation exercise on bias and discrimination in AI models. We will then conduct a 15-minute open discussion among participants on fairness metrics in their respective jurisdictions. The session will end with a 5-minute reflection by the speakers and some closing remarks. The roundtable layout would be especially useful for the open discussion, allowing participants to collaboratively craft a fairness metric contextualised to their own jurisdictions.
A. How can we make AI fair? What does the principle of fairness entail in the governance of AI?
B. Are existing metrics of AI fairness adequate to account for the unique issues that arise in a socio-culturally diverse region like Asia?
C. What are the metrics across which fairness is measured? How can we make these metrics more representative of Asian contexts?
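The question of which metrics fairness is measured across can be made concrete with a small sketch. Two widely used statistical fairness metrics are demographic parity (do two groups receive positive predictions at the same rate?) and equal opportunity (do qualified members of each group receive positive predictions at the same rate?). The toy data below is invented for illustration and deliberately constructed so that the two metrics disagree, which is one reason the choice of metric is itself a normative, context-dependent decision.

```python
# Two common fairness metrics on a toy dataset (all values invented).
# preds are model decisions (1 = selected); labels are ground truth
# (1 = qualified). Groups "a" and "b" are hypothetical.

def selection_rate(preds):
    return sum(preds) / len(preds)

def demographic_parity_diff(preds_a, preds_b):
    # Gap in positive-prediction rates between the two groups.
    return abs(selection_rate(preds_a) - selection_rate(preds_b))

def true_positive_rate(preds, labels):
    hits = [p for p, y in zip(preds, labels) if y == 1]
    return sum(hits) / len(hits)

def equal_opportunity_diff(preds_a, labels_a, preds_b, labels_b):
    # Gap in true-positive rates: were qualified candidates in each
    # group selected at the same rate?
    return abs(true_positive_rate(preds_a, labels_a)
               - true_positive_rate(preds_b, labels_b))

preds_a, labels_a = [1, 1, 0, 0], [1, 1, 0, 0]
preds_b, labels_b = [1, 1, 0, 0], [0, 0, 1, 1]

print(demographic_parity_diff(preds_a, preds_b))                    # 0.0
print(equal_opportunity_diff(preds_a, labels_a, preds_b, labels_b)) # 1.0
```

Here both groups are selected at the same rate (demographic parity is perfectly satisfied), yet every qualified candidate in group "a" is selected and none in group "b" is — the metrics give opposite verdicts on the same decisions.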
What will participants gain from attending this session? In this session, participants will learn what fairness entails, why context-specific fairness metrics are needed, and how to identify bias and discrimination in AI models. By the end of the session, participants will be able to identify bespoke metrics to evaluate fairness in AI in their socio-cultural contexts. Ensuring fair and ethical AI, contextualised to socio-cultural nuances, would enable a more permissive environment for innovation. Through discussions on the contextualised ethical governance of emerging technologies, participants will be better equipped to contribute to future dialogue on the governance of emerging technologies. These conversations will work towards balancing the risks that AI systems present, fostering a safer, more secure, and more trustworthy attitude towards the use of AI systems. This session will highlight to participants that the principles and ethics guiding the governance of emerging technologies are subjective in nature and must be tailored to specific regional, social, and local contexts.
Description:
This collaborative workshop session (roundtable format) will focus on the principle of fairness in AI, emphasising the need for context-specific fairness metrics for ethical governance. We will unpack the multifaceted concept of fairness in AI by discussing key components of the principle of fairness (equality, bias and non-discrimination, inclusivity, and reliability). While these components are relevant globally, their interpretation varies across jurisdictions. For example, unlike in western liberal democracies, factors such as caste or religion are key aspects of non-discrimination in India. Understanding these components is essential for developing and deploying AI systems that are safe, secure, and trustworthy. As the concept of fairness in AI has largely been developed with a focus on the US and Europe, it may be difficult to adopt in Asian countries, which have unique socio-cultural contexts and may interpret fairness differently. We will discuss fairness in India and Singapore to showcase how the concept varies across Asia, and from the broader global concept of fairness. Further, we will discuss case studies, such as the biased AI job recommendation system in Indonesia, to illustrate the complexities of fairness in AI. We will also conduct a simulation exercise using a hypothetical model to illustrate how bias can manifest through data points such as age, gender, and address. Finally, we will conduct an open discussion to gain perspectives from participants on fairness metrics in their own countries and to analyse how fairness as a concept differs based on their socio-cultural contexts. This session will leverage learning from an Asia-level dialogue conducted by SMU and CCG, which brought together diverse stakeholders from the APAC region to discuss the multifaceted concept of fairness in AI. We will also have a speaker from UNESCO who has experience with AI norms and can speak to global perspectives on fairness in AI.
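The kind of simulation exercise described above can be sketched minimally as follows. This is not the session's actual exercise: the applicants, pincodes, and screening rule are all invented. The point it illustrates is proxy discrimination — a rule that never sees the protected attribute can still disadvantage one group when an innocuous-looking feature such as address correlates with group membership.

```python
# Hypothetical screening rule that never uses the "group" field, yet
# produces unequal outcomes because pincode acts as a proxy for group.
# All data and thresholds below are invented for illustration.
from collections import defaultdict

# (group, pincode, score) — pincode correlates with group membership.
applicants = [
    ("A", "110001", 70), ("A", "110001", 65), ("A", "110001", 80),
    ("B", "110092", 70), ("B", "110092", 65), ("B", "110092", 80),
]

AFFLUENT_PINCODES = {"110001"}  # historical pattern baked into the rule

def shortlist(pincode, score):
    # Lower bar for "affluent" pincodes — an indirect proxy for group A.
    threshold = 65 if pincode in AFFLUENT_PINCODES else 75
    return score >= threshold

outcomes = defaultdict(list)
for group, pincode, score in applicants:
    outcomes[group].append(shortlist(pincode, score))

for group in sorted(outcomes):
    rate = sum(outcomes[group]) / len(outcomes[group])
    print(group, round(rate, 2))  # A 1.0 / B 0.33
```

Even though the score distributions of the two groups are identical, group A is shortlisted at three times the rate of group B — the disparity enters entirely through the address feature.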
We will use the session as a platform to introduce the idea of contextualising the ethical governance of technologies by discussing the subjective metrics that guide the principle of fairness in AI. The dialogue will help reframe conversations around fairness from an Asian perspective. As an academic legal research centre which focuses on Global Majority and Asia-Pacific perspectives in technology policy, we will use the insights gathered about contextualising AI in our work on ethical AI. The workshop will bring together organisations and researchers to form a community and build momentum towards sharing Global Majority and Asian perspectives in global norms development processes. The session will also create additional pathways for research and collaboration between stakeholders in the ecosystem. We will share the learnings and key takeaways from the session in a post on the CCG Blog and in an episode of the CCG Tech Podcast.
Hybrid Format: This workshop session includes a case study discussion (e.g. the biased AI job recommendation system in Indonesia), a simulation exercise illustrating bias in AI, and an open discussion encouraging participants to share their perspectives on fairness metrics. Through these interactive sections of the session, we aim to engage and interact with participants both online and onsite. To facilitate seamless interaction, we will have both an onsite and an online moderator to ensure equitable participation. We will also ensure that all participants have the opportunity to ask questions and seek clarifications both during the session and afterwards. Further, to engage online and onsite participants, we will use interactive features such as Mentimeter, a whiteboard, and polling. With these, we will be able to conduct an inclusive and accessible session with active and meaningful participation by all participants and speakers.
Report
AI regulation cannot follow a one-size-fits-all approach; it needs to be tailored to the context of each jurisdiction so that each can benefit from the use of AI.
The concept of fairness is a social, legal, and cultural phenomenon and must be looked at through this lens. The application of AI must not promote bias.
Norms formation in AI is inherently tipped in favour of the Global North; countries and organisations must work to specifically bring in Global South perspectives, especially since the Global South is usually the consumer of technology.
Create frameworks for meaningful inclusivity of the Global South. This includes having better and more representative data sets, as well as including the Global South in the design and audit of AI systems. We must also support a research agenda to investigate and enhance understanding of the effects of AI adoption in the welfare sector in Global South economies.
Nations in the Global South should establish platforms that facilitate dialogue and collaboration to develop shared frameworks and strategies for AI governance, tailored to address their common challenges effectively.
The session had three speakers and one in-person moderator. The speakers were Tejaswita Kharel (online), Yik Chan Chin (in-person), and Milton Mueller (in-person), and the session was moderated by Nidhi Singh. The speaker profiles are as follows:
- Tejaswita Kharel is a Project Officer at the Centre for Communication Governance at the National Law University Delhi, India.
- Yik Chan Chin is an Associate Professor at Beijing Normal University, China.
- Milton Mueller is a Professor at the Georgia Institute of Technology School of Public Policy, USA and is one of the founders of the Internet Governance Project.
The session, which garnered approximately 20 in-person and online participants, examined the critical intersection of artificial intelligence governance and cultural contextualization, with particular emphasis on Asian perspectives.
Tejaswita began by speaking about the current conception of AI fairness in India. She explained that within the Indian framework, fairness encompasses three fundamental dimensions: equality, non-discrimination, and inclusivity. The equality principle derives from constitutional imperatives, mandating equitable treatment under comparable circumstances and advocating for uniform technological accessibility. The non-discrimination component addresses the technical implications of algorithmic bias, specifically focusing on preventing the amplification of historical and societal divisions across various parameters including religion, caste, and gender. The inclusivity dimension emphasizes universal access to AI services and benefits, incorporating accessible grievance redressal mechanisms.
The next speaker, Yik Chan Chin, spoke about her experience studying the global ethics of AI, including as a member of the PNAI. She talked about the global consensus regarding core digital ethics values and identified three distinct fairness narratives: role adequacy, material equality, and formal equality. A comparative analysis of Chinese and Silicon Valley perspectives illuminated contrasting ethical frameworks. The Chinese approach emphasized harmony and tradition, with a notable evolution from prosperity-centric to risk-aware technological perspectives. Conversely, the Silicon Valley paradigm prioritized consequentialism and formal equality, predominantly viewing technological advancement as an opportunity rather than a potential threat.
Prof. Mueller brought in his practical experience to discuss the feasibility of cultural contextualization in AI systems, highlighting historical precedents among computing challenges, such as the predominance of Roman-alphabet keyboard designs and the limited adoption of multilingual domain names. He also discussed Georgia Tech's research findings regarding Arabic cultural context in AI outputs, which led to the development of the CAMEL (Cultural Appropriateness Measure Set for LLMs) framework, a significant advancement in cultural appropriateness assessment.
The discussion acknowledged substantial disparities between ethical aspirations and practical implementation in AI governance, noting regional variations in priorities. For instance, China's emphasis on consumer rights and antitrust measures contrasts with other regional approaches. Critical concerns regarding hyper-contextualization, discrimination in practical applications, gender representation in AI development, and challenges in fairness metrics were extensively examined.
The session concluded by emphasizing the imperative for continued research and development in addressing AI governance challenges, particularly focusing on developing culturally relevant AI models, implementing context-sensitive assessment methodologies, and maintaining sustained dialogue regarding regional variations in AI fairness conceptualization. The panel recognized time constraints as a limiting factor in exploring these complex issues comprehensively, suggesting extended sessions for future discussions.
These insights underscore the necessity for a nuanced, culturally informed approach to AI governance that acknowledges and incorporates diverse global perspectives while addressing practical implementation challenges in various cultural and societal contexts.