Tool for Testing Bad Student Programs

2014 
Nowadays, teachers, especially in computer science classes, are faced with grading an ever-increasing number of students' assignments. There are three ways to avoid overburdening them: reducing the number of students, increasing the number of teachers, and transferring some of the load from the teachers to computers. As the first two approaches are often not feasible, getting help from computers by introducing some form of automatic assessment into everyday practice becomes the only efficient way of dealing with this issue. This approach, at least theoretically, also increases the objectivity of the grading process. However, fully automated code assessment, with all the benefits it brings, also has its drawbacks. The major one lies in the fact that not everything can be tested by a machine. Typical examples are finer points of coding style, such as the correct use of procedures and recursion, which are very hard to capture even with very complex metrics. Unfortunately, many approaches to automated assessment focus on automating the grading of all aspects of students' solutions. This often leads to adapting courses and assignments to automated assessment, while the opposite should be preferred. Such a trend is inappropriate and can prove very unfavorable to students, especially in first-year programming courses. The main problem arises from the fact that beginner students are still learning how to program. They are not yet knowledgeable and disciplined enough to follow strict and rigid program specifications.
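The kind of machine-checkable assessment the abstract refers to can be illustrated with a minimal functional-grading sketch (all names and test cases here are hypothetical, not taken from the paper's actual tool): run a submission against input/output test cases and count the passes.

```python
# Minimal sketch of automated functional grading: run a submission
# against input/output test cases and report a score. The student
# "submission" is an inline function here; a real tool would execute
# submitted source files in a sandbox. All names are illustrative.

def student_sum(numbers):
    # Example submission with a bug: it crashes on the empty list.
    total = numbers[0]
    for n in numbers[1:]:
        total += n
    return total

TEST_CASES = [
    (([1, 2, 3],), 6),
    (([],), 0),        # edge case the buggy submission fails
    (([-5, 5],), 0),
]

def grade(func, cases):
    """Return (passed, total) for a submission against test cases."""
    passed = 0
    for args, expected in cases:
        try:
            if func(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing submission simply fails that case
    return passed, len(cases)

if __name__ == "__main__":
    p, t = grade(student_sum, TEST_CASES)
    print(f"passed {p}/{t} test cases")
```

Note how this harness can verify input/output behavior but, as the abstract points out, it says nothing about the finer points of coding style in the submission itself.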