Providing Meaningful Feedback for Autograding of Programming Assignments

2018 
Autograding systems are increasingly being deployed to meet the challenge of teaching programming at scale. We propose a methodology for extending autograders to provide meaningful feedback for incorrect programs. Our methodology starts with the instructor identifying the concepts and skills important to each programming assignment, designing the assignment, and designing a comprehensive test suite. Tests are then applied to code submissions to learn classes of common errors and to produce classifiers that automatically categorize errors in future submissions. The instructor maps the errors to concepts and skills and writes hints that help students find their misconceptions and mistakes. We have applied the methodology to two assignments from our Introduction to Computer Science course. We used submissions from one semester of the class to build classifiers and write hints for observed common errors. We manually validated the automatic error categorization and the potential usefulness of the hints using submissions from a second semester. We found that the hints given for erroneous submissions should be helpful in 96% or more of cases. Based on these promising results, we have deployed our hints and are currently collecting submissions and feedback from students and instructors.
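The pipeline the abstract describes, in which test-suite outcomes are used to categorize an erroneous submission and select an instructor-written hint, can be illustrated with a minimal sketch. The test-outcome vectors, error labels, hint texts, and choice of a decision-tree classifier below are hypothetical assumptions for illustration, not the assignments, hints, or models used in the paper.

```python
# Sketch: map a submission's test-outcome vector to an error class and a hint.
from sklearn.tree import DecisionTreeClassifier

# Each row is one submission's pass (1) / fail (0) outcome on the test suite.
training_outcomes = [
    [1, 1, 0, 0],   # fails the two boundary-condition tests
    [1, 1, 0, 1],
    [0, 1, 1, 1],   # fails the empty-input test
    [1, 1, 1, 1],   # passes everything
]
# Instructor-assigned error classes for the training submissions (hypothetical).
training_labels = ["off_by_one", "off_by_one", "empty_input_unhandled", "correct"]

# Instructor-written hints for each learned error class (hypothetical).
hints = {
    "off_by_one": "Check the loop bounds: does the last element get processed?",
    "empty_input_unhandled": "What should your function do when the input is empty?",
    "correct": "All tests passed.",
}

# Train a classifier that maps a test-outcome vector to an error class.
classifier = DecisionTreeClassifier().fit(training_outcomes, training_labels)

def feedback(outcome_vector):
    """Categorize a new submission and return the associated hint."""
    label = classifier.predict([outcome_vector])[0]
    return hints[label]

print(feedback([1, 1, 0, 0]))  # prints the off-by-one hint
```

In this sketch the features are simply which tests pass or fail, so any classifier that handles binary features would work; the key design point from the abstract is that the learned error classes are mapped back to concepts, skills, and hints by the instructor rather than generated automatically.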