How Does Augmented Observation Facilitate Multimodal Representational Thinking? Applying Deep Learning to Decode Complex Student Construct

2020 
In this paper, we demonstrate how machine learning can be used to quickly assess students' multimodal representational thinking, the complex construct that encodes how students form conceptual, perceptual, graphical, or mathematical symbols in their minds. Augmented reality (AR) technology was adopted to diversify students' representations: a low-cost, high-resolution thermal camera attached to a smartphone allowed students to explore the unseen world of thermodynamics. Ninth-grade students (N = 314) engaged in a prediction–observation–explanation (POE) inquiry cycle scaffolded to leverage the augmented observation the device provides. The objective was to investigate how machine learning could expedite automated assessment of multimodal representational thinking about heat energy. Two automated text-classification methods were adopted to decode the different mental representations students used to explain the haptic perceptions, thermal images, and graph data they collected in the lab. Because current automated assessment in science education rarely considers multilabel classification, we turned to a state-of-the-art deep learning technique: bidirectional encoder representations from transformers (BERT). The BERT model classified open-ended responses into appropriate categories with higher precision than the traditional machine learning method. The satisfactory accuracy of deep learning in assigning multiple labels is significant for processing qualitative data, because the categories of a complex student construct such as multimodal representational thinking are rarely mutually exclusive. The study thus offers a convenient technique for analyzing qualitative data that does not satisfy the mutual-exclusiveness assumption. Implications and future studies are discussed.
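To illustrate the multilabel setup the abstract describes, the sketch below shows a traditional multilabel text classifier (TF-IDF features with one binary logistic-regression classifier per label) of the kind a BERT model would be compared against. All example responses and label names here are hypothetical, invented for illustration; the paper's actual coding scheme and data are not shown.

```python
# Hedged sketch of a traditional multilabel text classifier: TF-IDF features
# plus a one-vs-rest logistic regression, so each response can receive
# multiple labels (e.g., both "haptic" and "thermal_imaging"). The responses
# and label names below are hypothetical, not from the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

responses = [
    "the cup felt warm because heat flowed into my hand",
    "the thermal image showed the metal glowing brighter",
    "the graph line rose steadily as the water heated",
    "my hand felt cold and the image looked dark blue",
]
labels = [
    {"haptic"},
    {"thermal_imaging"},
    {"graph"},
    {"haptic", "thermal_imaging"},  # one response, two labels
]

# Encode label sets as a binary indicator matrix (rows = responses,
# columns = labels), the standard multilabel target format.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

clf = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
clf.fit(responses, Y)

# Predict label sets for a new response; the output is one binary
# indicator row per input, which inverse_transform maps back to labels.
pred = clf.predict(["the image glowed and my hand felt the warmth"])
predicted_labels = mlb.inverse_transform(pred)
```

Because the one-vs-rest decomposition trains an independent binary classifier per label, it drops the mutual-exclusiveness assumption of ordinary single-label classification, which is the property the abstract highlights for coding complex student constructs.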