When, and Why, Simple Methods Fail: Lessons Learned from Hyperparameter Tuning in Software Analytics (and Elsewhere)

2020 
Tuning a data miner for software analytics is something of a black art. Recent research has shown that some of that tuning can be automated by tools called "hyperparameter optimizers". Much of that research has used tools developed outside of SE. Here, we ask how and when the special properties of SE data can be exploited to build faster and better optimizers. Specifically, we apply hyperparameter optimization to 120 data sets addressing problems such as bad-smell detection, predicting GitHub issue close time, bug report analysis, and defect prediction, as well as dozens of non-SE problems. To these, we apply a tool developed using SE data that (a) outperforms the state of the art on the SE problems yet (b) fails badly on the non-SE problems. From this experience, we infer a simple rule for when to use or avoid different kinds of optimizers: SE data is often about infrequent issues, such as the occasional defect, the rarely exploited security violation, or the requirement that holds for one special case, whereas (as we show) the non-SE data lacks this property. We conclude that this special property of SE data can be exploited to great effect; specifically, to find better optimizations for SE tasks via a tactic called "dodging" (explained in this paper).
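To make the "dodging" tactic concrete, the sketch below illustrates one way such an optimizer could skip redundant configurations: candidate hyperparameter settings whose scores land within some epsilon of results already seen are ignored, on the assumption that SE data yields only a few distinguishably different outcomes. This is a minimal illustrative sketch, not the paper's actual implementation; the names `dodge_search`, `sample_config`, and `evaluate`, the epsilon value, and the toy learner are all assumptions made for demonstration.

```python
import random

def dodge_search(sample_config, evaluate, budget=30, epsilon=0.05):
    """Random search that skips ("dodges") configurations whose score
    falls within `epsilon` of any score already seen.

    Illustrative sketch only; the real tactic described in the paper
    may differ in how it deprecates regions of the option space."""
    seen_scores = []                          # scores of configurations kept so far
    best_config, best_score = None, float("-inf")
    for _ in range(budget):
        config = sample_config()              # draw a random configuration
        score = evaluate(config)              # e.g. cross-validated F1 of a learner
        if any(abs(score - s) <= epsilon for s in seen_scores):
            continue                          # dodge: result is not meaningfully new
        seen_scores.append(score)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

# Hypothetical usage: tuning two parameters of an imaginary learner.
if __name__ == "__main__":
    random.seed(1)

    def sample_config():
        return {"n_trees": random.choice([10, 50, 100]),
                "max_depth": random.choice([3, 5, 10])}

    def evaluate(config):
        # Stand-in for training and scoring a real model on SE data.
        return 0.6 + 0.001 * config["n_trees"] - 0.01 * config["max_depth"]

    print(dodge_search(sample_config, evaluate))
```

The design intuition is that when most configurations produce near-identical results (as the paper argues is common for SE data), refusing to re-explore "seen" outcomes spends the evaluation budget on genuinely different regions of the hyperparameter space.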