System-Level Test Case Prioritization Using Machine Learning

2016 
Regression testing is the common task of retesting software that has been changed or extended (e.g., by new features) during software evolution. As retesting the whole program within reasonable time and cost is not feasible, usually only a subset of all test cases is executed for regression testing, e.g., selected by test case prioritization. Although a vast number of methods for test case prioritization exists, most of them require access to source code (i.e., they are white-box). However, in industrial practice, system-level testing is an important task that usually grants no access to source code (i.e., black-box), so other information has to be employed for an effective regression testing process. In this paper, we introduce a novel technique for test case prioritization for manual system-level regression testing based on supervised machine learning. Our approach considers black-box meta-data, such as test case history, as well as natural language test case descriptions for prioritization. We use the machine learning algorithm SVM Rank to evaluate our approach on two subject systems and measure the prioritization quality. Our results imply that our technique improves the failure detection rate significantly compared to a random order. In addition, we are able to outperform a test case order given by a test expert. Moreover, using natural language descriptions further improves the failure detection rate.
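To illustrate the general idea, the following is a minimal sketch of pairwise learning-to-rank in the spirit of SVM Rank: black-box meta-data (test case history) is combined with TF-IDF features derived from natural-language test descriptions, and a linear SVM is trained on pairwise feature differences so that its weight vector induces a prioritization. The feature names, example data, and use of scikit-learn's LinearSVC are illustrative assumptions, not the authors' exact tooling or feature set.

```python
# Hypothetical sketch of learning-to-rank test case prioritization.
# Assumptions: feature set (failure history, recency) and sample data are
# made up for illustration; the paper itself uses the SVM Rank tool.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Illustrative test cases: (description, [history features], failed_in_last_cycle)
# History features here: previous failure count, executions since last failure.
test_cases = [
    ("verify login with invalid credentials", [5, 1], 1),
    ("check report export to PDF",            [0, 9], 0),
    ("validate user profile update",          [2, 3], 1),
    ("smoke test of start-up sequence",       [0, 7], 0),
]

texts   = [t[0] for t in test_cases]
history = np.array([t[1] for t in test_cases], dtype=float)
labels  = np.array([t[2] for t in test_cases])

# Natural-language descriptions -> TF-IDF vectors, concatenated with meta-data.
tfidf = TfidfVectorizer()
X = np.hstack([tfidf.fit_transform(texts).toarray(), history])

# Pairwise transform: for each pair with different relevance (failed vs. passed),
# train on the difference vector with the sign of the preference.
pairs, signs = [], []
for i in range(len(X)):
    for j in range(len(X)):
        if labels[i] > labels[j]:
            pairs.append(X[i] - X[j]); signs.append(1)
            pairs.append(X[j] - X[i]); signs.append(-1)

rank_svm = LinearSVC(C=1.0)
rank_svm.fit(np.array(pairs), np.array(signs))

# Prioritize: higher score means execute earlier.
scores = X.dot(rank_svm.coef_.ravel())
priority_order = np.argsort(-scores)
print("Suggested execution order:", [texts[i] for i in priority_order])
```

The pairwise transform is the standard reduction behind ranking SVMs: learning which of two test cases should come first only requires labeled comparisons, which can be derived from historical verdicts rather than source-code coverage.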