Variance-Reduced Methods for Machine Learning
2020
Stochastic optimization lies at the heart of machine learning, and its cornerstone is stochastic gradient descent (SGD), a method introduced over 60 years ago. The last eight years have seen an exciting new development: variance reduction for stochastic optimization methods. These variance-reduced (VR) methods excel in settings where more than one pass through the training data is allowed, achieving faster convergence than SGD in theory and practice. These speedups underlie the surge of interest in VR methods and the fast-growing body of work on this topic. This review covers the key principles and main developments behind VR methods for optimization with finite data sets and is aimed at nonexpert readers. We focus mainly on the convex setting and provide pointers for readers interested in extensions to minimizing nonconvex functions.
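To make the contrast with SGD concrete, here is a minimal sketch of one canonical VR method, SVRG, which periodically computes a full-gradient snapshot and uses it to correct the stochastic gradient. This is an illustrative sketch, not the review's reference code: the function names (`svrg`, `grad_i`), the inner-loop length `m = n`, and the fixed step size are all assumptions chosen for readability.

```python
import numpy as np

def svrg(grad_i, x0, n, step, n_epochs, m=None):
    """Minimal SVRG sketch for minimizing (1/n) * sum_i f_i(x).

    grad_i(x, i) returns the gradient of the i-th component f_i at x.
    Names and defaults here are illustrative assumptions.
    """
    m = m or n                      # inner-loop length (a common default)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_epochs):
        # Full-gradient snapshot: one extra pass over all n data points.
        x_snap = x.copy()
        full_grad = np.mean([grad_i(x_snap, i) for i in range(n)], axis=0)
        for _ in range(m):
            i = np.random.randint(n)
            # Variance-reduced estimate: unbiased for the full gradient,
            # and its variance shrinks as x approaches x_snap.
            g = grad_i(x, i) - grad_i(x_snap, i) + full_grad
            x -= step * g
    return x

# Hypothetical usage: least squares, f_i(x) = 0.5 * (a_i @ x - b_i)**2
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
grad = lambda x, i: (A[i] @ x - b[i]) * A[i]
x_hat = svrg(grad, np.zeros(5), n=100, step=0.01, n_epochs=30)
```

The correction term `- grad_i(x_snap, i) + full_grad` is what distinguishes this from plain SGD: the two extra terms cancel in expectation, so the estimate stays unbiased, but they cancel the per-sample noise increasingly well as the iterates stabilize, which is the mechanism behind the faster convergence described above.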