Many hands make many fingers to point: challenges in creating accountable AI

2021 
Given the complexity of the teams involved in creating AI-based systems, how can we understand who should be held accountable when those systems fail? This paper reports findings about accountable AI from 26 interviews conducted with stakeholders drawn from the fields of AI research, law, and policy. Participants described the challenges presented by the distributed nature of how AI systems are designed, developed, deployed, and regulated. This distribution of agency, alongside existing mechanisms of accountability, responsibility, and liability, creates barriers to effective accountable design. As agency is distributed across the socio-technical landscape of an AI system, users without deep knowledge of how these systems operate become disempowered, unable to challenge or contest a system when it impacts their lives. In this context, accountability becomes a matter of building systems that can be challenged, interrogated, and, most importantly, adjusted in use to accommodate counter-intuitive results and unpredictable impacts. Thus, accountable system design can work to reconfigure socio-technical landscapes to protect the users of AI and to prevent unjust apportionment of risk.