Trust of Learning Systems: Considerations for Code, Algorithms, and Affordances for Learning

2018 
This chapter provides a synthesis of the literature on Machine Learning (ML), trust in automation, trust in code, and transparency. The chapter introduces the concept of ML and discusses three drivers of trust in ML-based systems: code structure; algorithm factors, namely performance, transparency, and error management; and affordances for learning. Code structure offers a static affordance for trustworthiness evaluations that can be both deep and peripheral. The overall performance of the algorithms and the transparency of their inputs, process, and outputs provide an opportunity for dynamic, experiential trustworthiness evaluations. Predictability and understanding are the foundations of trust and must be considered in ML applications. Many ML paradigms neglect the notion of environmental affordances for learning, which, from a trust perspective, may in fact be the most important differentiator between ML systems and traditional automation. These learning affordances provide a contextualised pedigree for trust considerations. In combination, the trustworthiness aspects of the code, the dynamic performance and transparency, and the learning affordances offer structural information, evidence of performance and understanding, and pedigree information from which ML approaches can be evaluated.