Indicators of Training Success in Virtual Reality Using Head and Eye Movements

2021 
An essential aspect in the evaluation of Virtual Training Environments (VTEs) is the assessment of users’ training success, preferably in real-time, e.g. to continuously adapt the training or to provide feedback. Leveraging users’ behavioral data has been shown to be a valid way to achieve this. Behavioral data include sensor data from eye trackers, head-mounted displays, and hand-held controllers, as well as semantic data such as a trainee’s focus on objects of interest within a VTE. While prior work mostly investigated the relevance of one, and in rare cases two, behavioral data sources at a time, we investigate the benefits of combining three data sources. We conduct a user study with 48 participants in an industrial training task to find correlations between training success and measures extracted from different behavioral data sources. We show that all individual data sources, i.e. eye gaze position and head movement, as well as the duration of objects in focus, are related to training success. Moreover, we find that simultaneously considering multiple behavioral data sources explains training success better than any single source. Further, we show that training outcomes can be predicted significantly better than chance by recording trainees for only parts of their training, which could be used to dynamically adapt a VTE’s difficulty. Finally, our work contributes to the long-term goal of substituting traditional evaluation of training success (e.g. through pen-and-paper tests) with an automated approach.
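The core idea of combining multiple behavioral data sources into a single predictor of training success can be illustrated with a minimal sketch. This is not the paper's actual model: the feature names (gaze dispersion, head movement, object-focus duration), the synthetic data, and the plain logistic-regression classifier are all illustrative assumptions.

```python
# Illustrative sketch only: combine three behavioral measures into one
# training-success predictor with a from-scratch logistic regression.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Fit weights and bias with per-sample gradient descent."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for x, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5 else 0

# Synthetic trainees: [gaze_dispersion, head_movement, focus_duration].
# Assumed pattern: successful trainees show lower gaze dispersion, less
# head movement, and longer focus on objects of interest.
random.seed(0)
X, y = [], []
for _ in range(200):
    success = random.random() < 0.5
    X.append([random.gauss(0.3 if success else 0.7, 0.1),
              random.gauss(0.4 if success else 0.8, 0.1),
              random.gauss(0.8 if success else 0.4, 0.1)])
    y.append(1 if success else 0)

w, b = fit_logistic(X, y)
accuracy = sum(predict(w, b, x) == yi for x, yi in zip(X, y)) / len(y)
print(f"training accuracy: {accuracy:.2f}")  # well above chance (0.5)
```

In the same spirit, a real system could feed such a combined score back into the VTE, e.g. to adapt task difficulty while the trainee is still mid-training.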