Empirical Auto-Evaluation of Python Code for Performance Analysis of Transformer Network Using T5 Architecture

2021 
The immense real-time applicability of Python makes the task of evaluating Python code highly intriguing in the Natural Language Processing (NLP) domain. Evaluating computer programs poses a challenge of logical and arithmetic understanding. It is therefore highly relevant to analyze the empirical ability of current state-of-the-art sequence-based neural architectures to evaluate small computer programs. One possible application of such analysis is the auto-evaluation of erroneous Python code. In this context, we focus our work on evaluating small Python code blocks, with or without errors, and examine the efficiency of the recent T5 Transformer network model on this task. Performance has been measured in terms of accuracy, several ROUGE scores, and BLEU scores. Observations reveal that the T5 Transformer is able to compute the output for both correct and erroneous Python code blocks with more than 65% accuracy.
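The evaluation setup described above (comparing model-predicted program outputs against ground-truth outputs) can be sketched as follows. This is a minimal illustration, not the authors' code: the metric functions are simplified stand-ins (exact-match accuracy and a BLEU-1-style clipped unigram precision), and the example predictions and references are hypothetical.

```python
from collections import Counter


def exact_match_accuracy(predictions, references):
    """Fraction of predicted outputs that exactly match the reference output."""
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)


def unigram_precision(prediction, reference):
    """BLEU-1-style clipped unigram precision between two output strings."""
    pred_tokens = prediction.split()
    if not pred_tokens:
        return 0.0
    ref_counts = Counter(reference.split())
    matched = sum(min(count, ref_counts[tok])
                  for tok, count in Counter(pred_tokens).items())
    return matched / len(pred_tokens)


# Hypothetical ground-truth outputs of small Python code blocks
# (including an erroneous block whose "output" is its error type).
references = ["[1, 4, 9]", "Error: ZeroDivisionError", "15"]
# Hypothetical decoded generations from a sequence model such as T5.
predictions = ["[1, 4, 9]", "Error: NameError", "15"]

accuracy = exact_match_accuracy(predictions, references)  # 2 of 3 correct
```

In the paper's setting, `predictions` would come from a fine-tuned T5 model decoding the expected output of each code block; ROUGE scoring would replace or complement the unigram precision shown here.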