Return of the Infinitesimal Jackknife

2018 
The error or variability of machine learning algorithms is often assessed by repeatedly re-fitting a model with different weighted versions of the observed data. The ubiquitous tools of cross-validation (CV) and the bootstrap are examples of this technique. These methods are powerful in large part due to their model agnosticism, but they can be slow to run on modern, large datasets due to the need to repeatedly re-fit the model. In this work, we use a linear approximation to the dependence of the fitting procedure on the weights, producing results that can be faster than repeated re-fitting by orders of magnitude. We provide explicit finite-sample error bounds for the approximation in terms of a small number of simple, verifiable assumptions. Our results apply whether the weights and data are stochastic, deterministic, or even adversarially chosen, and so can be used as a tool for proving the accuracy of the approximation on a wide variety of problems. As a corollary, we state mild regularity conditions under which our approximation consistently estimates true leave-k-out cross-validation for any fixed k. We demonstrate the accuracy of our methods on a range of simulated and real datasets.
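
To make the idea concrete, the following is a minimal sketch of the kind of linear approximation the abstract describes (an infinitesimal-jackknife-style expansion of the fit in the data weights), worked out for weighted least squares and used to approximate leave-one-out cross-validation without re-fitting. This is an illustration under simplifying assumptions (a smooth quadratic loss, an exactly solved full-data fit), not the paper's implementation; all variable names are illustrative.

```python
import numpy as np

# Simulated regression data (illustrative only).
rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=n)

# Full-data fit at weights w = 1 for the loss f_i(theta) = 0.5 * (y_i - x_i' theta)^2.
H = X.T @ X                                  # Hessian of the total objective at theta_hat
theta_hat = np.linalg.solve(H, X.T @ y)

# Per-datapoint gradients at theta_hat: g_i = -x_i * (y_i - x_i' theta_hat).
residuals = y - X @ theta_hat
G = -X * residuals[:, None]                  # shape (n, d); row i is g_i

# Linear approximation in the weights:
#   theta(w) ~= theta_hat - H^{-1} * sum_i (w_i - 1) * g_i.
# For leave-one-out, w_k = 0 and all other weights are 1, so
#   theta_{-k} ~= theta_hat + H^{-1} g_k   (no re-fitting required).
theta_loo_approx = theta_hat + np.linalg.solve(H, G.T).T   # shape (n, d)

# Sanity check against exact re-fitting for a few held-out points.
for k in range(3):
    mask = np.ones(n, dtype=bool)
    mask[k] = False
    theta_exact = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    print(k, np.max(np.abs(theta_loo_approx[k] - theta_exact)))
```

The same pattern extends to any smooth M-estimator: fit once, form per-datapoint gradients and the Hessian at the full-data solution, and evaluate the linearized fit at whatever reweighting (leave-k-out, bootstrap) is of interest.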