SWAG: A Wrapper Method for Sparse Learning

2020 
Predictive power has long been the main research focus of learning algorithms, with the goal of minimizing the test error in supervised classification and regression problems. While the general approach is to consider all available attributes in a dataset to best predict the response of interest, an important branch of research focuses on sparse learning in order to avoid overfitting, which can greatly harm out-of-sample prediction accuracy. However, in many practical settings we believe that only an extremely small combination of attributes affects the response, whereas even sparse-learning methods can retain a high number of attributes in high-dimensional settings and consequently deliver inconsistent prediction performance. As a result, the latter methods can also be hard to interpret for researchers and practitioners, a problem that is even more relevant for the “black-box”-type mechanisms of many learning approaches. Finally, aside from the need to quantify prediction uncertainty, there is often a problem of replicability, since not all data-collection procedures measure (or observe) the same attributes and therefore cannot make use of proposed learners for testing purposes. To address all these issues, we propose to study a procedure that combines screening and wrapper methods and aims to find a library of extremely low-dimensional attribute combinations (with consequently low data-collection and storage costs) in order to (i) match or improve the predictive performance of any particular learning method that uses all attributes as input (including sparse learners); (ii) provide a low-dimensional network of attributes that is easily interpretable by researchers and practitioners; and (iii) increase the potential replicability of results thanks to a diversity of attribute combinations defining strong learners with equivalent predictive power. We call this algorithm the “Sparse Wrapper AlGorithm” (SWAG).
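
The abstract gives enough detail to sketch the general shape of such a procedure. Below is a minimal, illustrative Python sketch of a screening-and-wrapper loop in the spirit of SWAG, not the authors' reference implementation: a base learner is first screened on single attributes, and at each subsequent dimension the best-performing subsets (by cross-validated error) are grown by one attribute. The function names (`swag`, `cv_error`, `_best`), the SVM base learner, and the parameters `d_max`, `alpha`, and `n_models` are assumptions made for illustration.

```python
# Illustrative sketch of a SWAG-style screening-and-wrapper loop.
# NOT the authors' reference implementation; learner and CV scheme
# are interchangeable, and all names/parameters here are assumed.
import random

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC


def cv_error(estimator, X, y, subset, cv=10):
    """Cross-validated error of the learner restricted to one attribute subset."""
    scores = cross_val_score(estimator, X[:, list(subset)], y, cv=cv)
    return 1.0 - scores.mean()


def _best(results, alpha):
    """Keep the subsets whose CV error falls at or below the alpha-quantile."""
    cutoff = np.quantile(list(results.values()), alpha)
    return [s for s, err in results.items() if err <= cutoff]


def swag(X, y, estimator=None, d_max=5, alpha=0.05, n_models=1000, seed=0):
    """Grow a library of strong, extremely low-dimensional attribute subsets.

    At each dimension d, candidate subsets are built by adding one attribute
    to the subsets retained at dimension d - 1; only the best `alpha`
    proportion (by CV error) survives to seed the next step.
    """
    rng = random.Random(seed)
    estimator = estimator or SVC(kernel="linear")
    p = X.shape[1]

    # Step 1: screen every single attribute on its own.
    results = {(j,): cv_error(estimator, X, y, (j,)) for j in range(p)}
    library = {1: results}
    retained = _best(results, alpha)

    for d in range(2, d_max + 1):
        # Candidates: each retained subset augmented by one new attribute.
        candidates = {tuple(sorted(set(s) | {j}))
                      for s in retained for j in range(p) if j not in s}
        # Cap the number of models trained per dimension.
        candidates = rng.sample(sorted(candidates),
                                min(n_models, len(candidates)))
        results = {s: cv_error(estimator, X, y, s) for s in candidates}
        library[d] = results
        retained = _best(results, alpha)

    return library  # {dimension: {attribute subset: CV error}}
```

The returned `library` corresponds to the library of low-dimensional attribute combinations described above: for each dimension it records which small subsets of attributes yield learners of comparable predictive strength, which is what supports the interpretability and replicability claims of the abstract.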