Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields

2015 
We apply stochastic average gradient (SAG) algorithms to training conditional random fields (CRFs). We describe a practical implementation that uses structure in the CRF gradient to reduce the memory requirement of this linearly-convergent stochastic gradient method, propose a non-uniform sampling scheme that substantially improves practical performance, and analyze the rate of convergence of the SAGA variant under non-uniform sampling. Our experimental results reveal that our method significantly outperforms existing methods in terms of the training objective, and performs as well as or better than optimally-tuned stochastic gradient methods in terms of test error.
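The SAGA variant with non-uniform sampling mentioned in the abstract can be illustrated on a generic finite-sum problem. The sketch below is an assumption-laden toy (a least-squares sum rather than a CRF objective, with per-example Lipschitz constants L_i = ||a_i||^2 driving the sampling distribution and a heuristic step size); it is not the paper's implementation, only a minimal illustration of the unbiased importance-weighted SAGA update under sampling probabilities p_i ∝ L_i:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-sum least-squares problem: f(x) = (1/n) * sum_i 0.5 * (a_i.x - b_i)^2
# (stand-in for the CRF training objective; purely illustrative)
n, d = 50, 5
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

def grad_i(x, i):
    # Gradient of the i-th summand f_i(x) = 0.5 * (a_i.x - b_i)^2
    return (A[i] @ x - b[i]) * A[i]

def objective(x):
    return 0.5 * np.mean((A @ x - b) ** 2)

# Per-example Lipschitz constants L_i = ||a_i||^2; sample i with p_i ∝ L_i,
# so "harder" examples are visited more often (the non-uniform scheme's idea)
L = np.sum(A ** 2, axis=1)
p = L / L.sum()

x = np.zeros(d)
memory = np.array([grad_i(x, i) for i in range(n)])  # stored gradient table g_i
g_bar = memory.mean(axis=0)                          # running average of the table
alpha = 1.0 / (3.0 * L.mean())                       # heuristic step size (assumption)

for _ in range(5000):
    i = rng.choice(n, p=p)
    g_new = grad_i(x, i)
    # Importance weight 1/(n * p_i) keeps the SAGA gradient estimator unbiased
    # under non-uniform sampling: E[v] = full gradient of f at x.
    v = (g_new - memory[i]) / (n * p[i]) + g_bar
    x -= alpha * v
    # Update the stored gradient for example i and its running average
    g_bar += (g_new - memory[i]) / n
    memory[i] = g_new
```

For CRFs, the paper's memory reduction exploits that each ∇f_i is determined by low-dimensional sufficient statistics of the sequence, so the table above need not store full parameter-sized gradients per example.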