Abstract (EN):
Automated assessment tools for programming assignments have become increasingly popular in computing education. These tools offer a cost-effective and highly available way to provide timely and consistent feedback to students. However, when evaluating logically incorrect source code, there are reasonable concerns about the formative gap between the feedback generated by such tools and that of human teaching assistants. A teaching assistant may pinpoint logical errors, describe how the program fails to perform the proposed task, or suggest possible ways to fix mistakes without revealing the correct code. Automated assessment tools, in contrast, typically return a measure of the program's correctness, possibly backed by failing test cases and, only in a few cases, fixes to the program. In this paper, we introduce AsanasAssist, a tool that generates formative feedback messages to help students repair functional mistakes in their submitted source code, based on the solution with the most similar algorithmic strategy. These suggestions are delivered with incremental levels of detail according to the student's needs, from identifying the block containing the error to displaying the correct source code. Furthermore, we evaluate how well the automatically generated messages provided by AsanasAssist match those provided by a human teaching assistant. The results demonstrate that the tool achieves feedback comparable to that of a human grader while providing it just in time.
Language:
English
Type (Professor's evaluation):
Scientific
No. of pages:
18