
IGF 2019 OF #13 Human Rights & AI Wrongs: Who Is Responsible?

    Description

    The impact of artificial intelligence (AI) on human rights and on the viability of our democratic processes became starkly visible during the Cambridge Analytica scandal and has been increasingly debated since. Countries committed to protecting human rights must ensure that those who benefit from developing and deploying digital technologies and AI are effectively held responsible for their risks and consequences. Effective and legitimate mechanisms are needed to prevent violations of human rights and to promote an enabling socio-economic environment in which human rights and the rule of law are anchored. Only legitimate mechanisms can ensure that we properly, sustainably and collectively reap the many benefits of AI.

    This open forum addresses the following questions:

    - Who bears responsibility for the adverse consequences of advanced digital technologies, such as AI?
    - How can we address the ‘control problem’ that flows from the capacity of AI-driven systems to operate more or less autonomously from their creators?
    - What consequences stem from the fact that most data processing infrastructures are in private hands?
    - What are the effects of the increasing dependence of public services on a few very large private actors?

    The open forum will discuss the respective obligations of states and responsibilities of private actors regarding the protection and promotion of human rights and fundamental freedoms in the context of AI and machine learning systems. It will also explore a range of different ‘responsibility models’ that could be adopted to govern the allocation of responsibility for different kinds of adverse impacts arising from the operation of AI systems. As background resources, the debate will build on the Council of Europe study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework, and on the draft Recommendation of the Committee of Ministers to member States on the human rights impacts of algorithmic systems, available at: https://www.coe.int/en/web/freedom-expression/msi-aut

    Organizers

    Council of Europe (CoE)
    EU Agency for Fundamental Rights (FRA)

    Speakers

    Keynote scene-setting and moderation: Jan Kleijssen, Director, Information Society – Action against Crime, Council of Europe

    Speakers: 

    - Joe McNamee, member of the Council of Europe Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT)
    - David Reichel, Social Research - Research & Data Unit, FRA
    - Cornelia Kutterer, Senior Director, EU Government Affairs, Privacy and Digital Policies, Microsoft
    - Clara Neppel, Senior Director, European Business Operations, IEEE

    Online Moderator

    Peter Kimpian

    SDGs

    GOAL 16: Peace, Justice and Strong Institutions

    1. Key Policy Questions and Expectations
    The open forum will discuss the respective obligations of states and responsibilities of private actors regarding the protection and promotion of human rights and fundamental freedoms in the context of AI and machine learning systems. It will also explore a range of different ‘responsibility models’ that could be adopted to govern the allocation of responsibility for different kinds of adverse impacts arising from the operation of AI systems.
    It will address the following main questions:
    - Who bears responsibility for the adverse consequences of advanced digital technologies, such as AI? 
    - What consequences stem from the fact that most data processing infrastructures are in private hands?
    2. Summary of Issues Discussed

    The panellists discussed a range of issues related to the attribution of responsibility for adverse human rights effects stemming from the application of AI technologies. The debate touched, in particular, on the potential of regulation and self-regulation to address the issue effectively. There was broad consensus that only clear regulatory frameworks can provide a firm foundation for a rule-of-law-based approach, which is key to the protection of human rights. There was further agreement that such clear regulatory frameworks are as much in the interest of businesses as of users, since they provide concrete instructions on what needs to be done to protect human rights.

    It was emphasised that a lack of understanding of what AI is and how it functions creates considerable mystification around the technology. Concrete information is needed to inform regulation; for example, false positives and false negatives need to be evaluated in real numbers.

    The concepts of informed trust and of responsible AI were introduced. The panellists outlined in clear terms what the technical and business communities can do to ensure effective and enforceable accountability.

    3. Policy Recommendations or Suggestions for the Way Forward

    The panellists agreed that:

    - there is a need for impact assessment in concrete areas (such as automated decision-making (ADM), facial recognition or incurred data use) and in measurable terms, encompassing the full range of human rights and the whole life cycle of AI technologies;

    - there is a need for a clearer understanding of what we mean by transparency, accountability and other key principles;

    - empowerment of users must be one of the key elements of relevant policies introduced by governments and private actors alike;

    - there is a need for effective multi-stakeholder cooperation, in particular to bridge the gap between the tech community and legislators.

    4. Other Initiatives Addressing the Session Issues

    The IEEE representative informed the audience about ongoing work on a set of technical standards on how to put ethics into code, and about work now starting on a certification system.

    The Council of Europe has prepared a draft Recommendation of the Committee of Ministers on the human rights impacts of algorithmic systems.

    5. Making Progress for Tackled Issues

    The need for quality, targeted research, for effective multi-stakeholder cooperation, and for a comprehensive review of the existing regulatory frameworks with a view to identifying areas where safeguards for human rights protection are missing were mentioned as indispensable conditions for progress.

    6. Estimated Participation

    Onsite participation: approximately 250 participants; gender balance roughly 50/50.

    Online participation: no information. No questions from online participants.

    7. Reflection to Gender Issues

    The session did not directly discuss gender issues. It touched, however, on other vulnerable groups, in particular children, who need special protection in the digital environment. The discussion also strongly emphasised that discrimination is one of the most severe risks stemming from the use of AI technologies, as they tend to amplify existing inequalities and biases.