Towards Automatic Real-Time Estimation of Observed Learner’s Attention Using Psychophysiological and Affective Signals: The Touch-Typing Study Case

2017 
This paper presents an experimental study on the real-time estimation of observed learners’ attention given the task of touch-typing. The aim is to examine whether the observed attention estimates gathered from human raters can be computationally modeled in real time, based on the learner’s psychophysiological and affective signals. A key observation from this paper is that the observed attention varies continuously throughout the task. The findings show that a relatively high sampling rate is required for the modeling of observed learners’ attention, which is impossible to achieve with traditional assessment methods (e.g., between-session self-reports). The results show that multiple linear regression models were relatively successful at discriminating low and high levels of the observed attention. In the best case, the within-learner model performed with the goodness-of-fit adjusted $R^{2}_{\text{adj}} = 0.888$ and RMSE = 0.103 (range of the attention scores 1–5). However, the multiple linear model underperformed in the estimation of the observed attention between learners, indicating that the differences among the learners are often significant and cannot be overcome by a general linear model of attention. The between-learner model achieved an adjusted $R^{2}_{\text{adj}} = 0.227$ and RMSE = 0.708, explaining only 22.7% of the variability. The influence of individual psychophysiological and affective signals (eye gaze, pupil dilation, and valence and arousal) on the estimation of the observed attention was also examined. The results show that both affective dimensions (valence and arousal), as well as the EyePos2D offset (the distance of an eye from the average position in the $xy$ plane parallel to the screen) and the EyePos-Z (the distance of an eye from the screen), significantly and most frequently influence the performance of the within-learner model.
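The abstract's evaluation metrics (multiple linear regression with adjusted $R^{2}$ and RMSE against rated attention scores) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the feature names and toy data are hypothetical, and the ordinary least-squares fit is implemented directly via the normal equations.

```python
# Hypothetical sketch: fit a within-learner multiple linear regression on
# psychophysiological/affective features and score it with adjusted R^2 and
# RMSE, as in the paper's evaluation. Feature names are illustrative only.
import math

def fit_ols(X, y):
    """Ordinary least squares with an intercept; returns [b0, b1, ..., bk]."""
    A = [[1.0] + list(row) for row in X]      # prepend intercept column
    n, p = len(A), len(A[0])
    # Normal equations: (A^T A) b = A^T y
    ata = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(p)]
           for i in range(p)]
    aty = [sum(A[k][i] * y[k] for k in range(n)) for i in range(p)]
    # Gaussian elimination with partial pivoting.
    for i in range(p):
        piv = max(range(i, p), key=lambda r: abs(ata[r][i]))
        ata[i], ata[piv] = ata[piv], ata[i]
        aty[i], aty[piv] = aty[piv], aty[i]
        for r in range(i + 1, p):
            f = ata[r][i] / ata[i][i]
            for c in range(i, p):
                ata[r][c] -= f * ata[i][c]
            aty[r] -= f * aty[i]
    b = [0.0] * p
    for i in reversed(range(p)):
        b[i] = (aty[i] - sum(ata[i][c] * b[c]
                             for c in range(i + 1, p))) / ata[i][i]
    return b

def adjusted_r2_and_rmse(X, y, b):
    """Goodness of fit for a fitted model: (adjusted R^2, RMSE)."""
    n, k = len(y), len(X[0])
    pred = [b[0] + sum(bj * xj for bj, xj in zip(b[1:], row)) for row in X]
    ybar = sum(y) / n
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    rmse = math.sqrt(ss_res / n)
    return r2_adj, rmse

# Toy usage: rows are [EyePos2D_offset, EyePosZ, valence, arousal] samples
# (invented values), y is a rated attention score on the paper's 1-5 scale.
X = [[0.1, 0.60, 3.0, 2.0], [0.3, 0.55, 2.0, 3.0], [0.2, 0.70, 4.0, 2.5],
     [0.5, 0.50, 1.5, 4.0], [0.4, 0.65, 3.5, 3.5], [0.6, 0.45, 2.5, 4.5]]
y = [4.2, 3.1, 4.6, 1.8, 3.9, 2.4]
b = fit_ols(X, y)
r2_adj, rmse = adjusted_r2_and_rmse(X, y, b)
```

A within-learner model would be fit per participant on that learner's own samples; the paper's weaker between-learner result corresponds to fitting one such model across all learners' pooled data.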