Incorporating Question Information to Enhance the Performance of Automatic Short Answer Grading.

2021 
Automatic short answer grading (ASAG) tackles the problem of automatically assessing students' constructed responses to open-ended questions, and remains far from solved in NLP. Previous work mainly concentrates on extracting features from the textual relationship between the student answer and the model answer: a grade is assigned based on the similarity between the student's answer and the model answer. However, ASAG models trained on a single type of feature lack the capacity to handle the diversity of conceptual representations in students' responses. To capture multiple types of features, our work exploits prior knowledge to enrich the extracted features. The model is based on the Transformer. More specifically, we propose a novel training approach: a forward-propagation pass over the provided question and the student answer is randomly added within a training step, so that the textual information between the two is also exploited. A feature fusion layer followed by an output layer is introduced accordingly for fine-tuning. We evaluate the proposed model on two datasets, the University of North Texas dataset and the Student Response Analysis (SRA) dataset, and compare it with baselines on the ASAG task. The results show that our model outperforms recent state-of-the-art models.
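The training idea described above (randomly adding a forward pass over the question-answer pair, then fusing features) can be sketched roughly as follows. This is a hedged illustration only, not the authors' code: the function names, the toy overlap-based encoder, and the probability parameter `p_question` are all assumptions standing in for a real Transformer encoder and fusion layer.

```python
import random

def encode(text_a, text_b):
    """Placeholder for a Transformer encoder over a sentence pair.
    Here: a toy bag-of-words overlap feature, just to make the flow runnable."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    return [len(a & b), len(a), len(b)]

def fuse(feat_qa, feat_ma):
    """Stand-in for the feature fusion layer: concatenate both feature vectors."""
    return feat_qa + feat_ma

def training_step_features(question, model_answer, student_answer,
                           p_question=0.5, rng=random):
    """With probability p_question, add an extra forward pass over the
    (question, student answer) pair and fuse its features with those of the
    (model answer, student answer) pair; otherwise use the latter alone."""
    feat_ma = encode(model_answer, student_answer)
    if rng.random() < p_question:
        feat_qa = encode(question, student_answer)
        return fuse(feat_qa, feat_ma)
    return feat_ma
```

In this sketch the output of `training_step_features` would feed the output layer that predicts the grade; during fine-tuning, steps that took the question branch produce the fused (longer) feature vector, which is what the added fusion layer is for.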