Fret: Functional Reinforced Transformer with BERT for Code Summarization

2020 
Code summarization has long been viewed as a challenge in software engineering because of the difficulties of understanding source code and generating natural language. Some mainstream methods combine abstract syntax trees with language models to capture the structural information of the source code and generate relatively satisfactory comments. However, these methods are still deficient in code understanding and limited by the long-dependency problem. In this paper, we propose a novel model called Fret, which stands for Functional REinforced Transformer with BERT. The model provides a new way to generate code comments by learning code functionalities and deepening code understanding while alleviating the long-dependency problem. For this purpose, a novel reinforcer is proposed for learning the functional contents of code, so that more accurate summaries describing code functionality can be generated. In addition, a more efficient algorithm is newly designed to capture the source code structure. The experimental results demonstrate the effectiveness of our model: Fret significantly outperforms all the state-of-the-art methods we examine, pushing the BLEU-4 score to 24.32 for Java code summarization (a 14.23% absolute improvement) and the ROUGE-L score to 40.12 for Python. An ablation study is also conducted to further explore the impact of each component of our method.
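To make the overall setup concrete, the sketch below shows a generic BERT-encoder / Transformer-decoder pipeline for code summarization in PyTorch. It is only an illustration of the general architecture family the abstract describes, not the authors' Fret implementation: the functional reinforcer and the structure-capturing algorithm from the paper are not reproduced, the class and parameter names are hypothetical, and the tiny hyperparameters are placeholders chosen so the example runs quickly.

```python
# Minimal, illustrative sketch (NOT the authors' Fret code): a BERT encoder
# over code tokens feeding a standard Transformer decoder that emits summary
# tokens. The paper's reinforcer and structure-capturing algorithm are omitted.
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel


class CodeSummarizer(nn.Module):
    def __init__(self, vocab_size=1000, hidden_size=128):
        super().__init__()
        # Small randomly initialised BERT encoder (a real system would load
        # pretrained weights instead of this toy configuration).
        self.encoder = BertModel(BertConfig(
            vocab_size=vocab_size,
            hidden_size=hidden_size,
            num_hidden_layers=2,
            num_attention_heads=2,
            intermediate_size=256,
        ))
        # Transformer decoder that cross-attends over the encoded code and
        # generates natural-language summary tokens autoregressively.
        self.summary_embed = nn.Embedding(vocab_size, hidden_size)
        layer = nn.TransformerDecoderLayer(
            d_model=hidden_size, nhead=2, dim_feedforward=256, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, code_ids, code_mask, summary_ids):
        # Encode the code token sequence with BERT.
        memory = self.encoder(
            input_ids=code_ids, attention_mask=code_mask).last_hidden_state
        tgt = self.summary_embed(summary_ids)
        # Causal mask so each summary position only attends to earlier ones.
        causal = nn.Transformer.generate_square_subsequent_mask(
            summary_ids.size(1))
        dec = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(dec)  # logits over the summary vocabulary


if __name__ == "__main__":
    model = CodeSummarizer()
    code = torch.randint(0, 1000, (2, 16))    # toy code-token ids
    mask = torch.ones_like(code)
    summary = torch.randint(0, 1000, (2, 8))  # toy summary-token ids
    logits = model(code, mask, summary)
    print(logits.shape)                       # torch.Size([2, 8, 1000])
```

In practice such a model would be trained with cross-entropy against reference comments and evaluated with BLEU and ROUGE-L, as in the experiments reported above; how Fret augments this baseline with its reinforcer is detailed in the full paper.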