Predicting Worker Accuracy from Nonverbal Behaviour: Benefits and Potential for Algorithmic Bias

2021 
With the rise of algorithmic management (and online work in general), there is growing interest in techniques that can monitor worker performance. For example, if a system can automatically detect that a worker is becoming distracted or disengaged, it can intervene to motivate the worker or flag their output as requiring further quality control. Prior research has explored the potential for detecting nonverbal cues that could predict mistakes (e.g., detecting boredom in students or fatigue in drivers). Here, we learn a model that reliably predicts worker accuracy from nonverbal behaviour during tedious and repetitive annotation tasks. We show that annotation accuracy can be substantially improved by discarding annotations that the model predicts to be of low accuracy. While these results are promising, recent concerns about algorithmic bias led us to further investigate whether the model's predictions are influenced by skin tone. Unfortunately, we find that the algorithm showed systematic bias that disadvantaged some darker-skinned workers and incorrectly rewarded some lighter-skinned workers. We discuss the apparent reasons for this bias and offer suggestions for whether and how such methods could be deployed to enhance worker engagement.
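The core idea in the abstract, predicting per-annotation accuracy from nonverbal features and discarding the annotations predicted to be unreliable, can be illustrated with a minimal sketch. The feature names, classifier, and threshold below are hypothetical assumptions for illustration only; the paper's actual features and model are not specified in this abstract.

```python
# Minimal sketch of the filtering idea: train a classifier on nonverbal
# features to predict whether an annotation is likely correct, then keep
# only annotations above a confidence threshold.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical per-annotation nonverbal features (e.g., gaze dispersion,
# head-pose variance, blink rate) and binary labels (1 = annotation correct).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Predicted probability that each held-out annotation is correct.
p_correct = model.predict_proba(X_test)[:, 1]

# Keep annotations predicted to be accurate; route the rest to further
# quality control (here, simply discarded).
THRESHOLD = 0.7  # hypothetical operating point
kept = p_correct >= THRESHOLD

print(f"Accuracy of all annotations:  {y_test.mean():.3f}")
print(f"Accuracy of kept annotations: {y_test[kept].mean():.3f} "
      f"({kept.sum()}/{len(kept)} retained)")
```

Note that any such filter should also be audited for the kind of subgroup bias the paper reports, e.g., by comparing retention rates and post-filter accuracy across worker groups.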