
Steps Towards Value-Aligned Systems

2020 
Algorithmic (including AI/ML) decision-making artifacts are an established and growing part of our decision-making ecosystem. They are now indispensable tools for managing the flood of information we rely on to make effective decisions in a complex world. The current literature is full of examples of how individual artifacts violate societal norms and expectations (e.g., violations of fairness, privacy, or safety norms). Against this backdrop, this discussion highlights an under-emphasized perspective in the body of research focused on assessing value misalignment in AI-equipped sociotechnical systems. The research on value misalignment so far has focused strongly on the behavior of individual technical artifacts. This discussion argues for a more structured, systems-level approach to assessing value alignment in sociotechnical systems. We rely primarily on the research on fairness to make our arguments concrete, and we use the opportunity to highlight how adopting a systems perspective improves our ability to explain and address value misalignments. Our discussion ends with an exploration of priority questions that demand attention if we are to assure the value alignment of whole systems, not just individual artifacts.