Speeding up online training of L1 Support Vector Machines

2016 
This paper proposes a novel experimental environment for solving the classic nonlinear soft-margin L1 Support Vector Machine (SVM) problem using a Stochastic Gradient Descent (SGD) algorithm. Our implementation uses a unique method of random sampling and alpha calculation. The developed code achieves competitive accuracy as well as very fast SVM training (small CPU time). The SGD model's performance is compared to the L2 SVM solutions obtained by the Minimal Norm SVM (MN-SVM) and the Non-Negative Iterative Single Data Algorithm (NN-ISDA) software. These two algorithms have shown excellent performance on large datasets, which is why we chose to have our SGD implementation compete with them. All experiments were performed under strict double (nested) cross-validation, and the results are reported in terms of accuracy and the CPU time used by the three methods.
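The abstract does not spell out the paper's particular sampling and alpha-update rules, so the following is only a minimal sketch of the general technique it builds on: a standard kernelized, Pegasos-style SGD update for the hinge loss, where a dual coefficient (alpha) per training point is incremented whenever a randomly sampled point violates the margin. The function names, the RBF kernel choice, and all hyperparameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel between two sample vectors (illustrative choice)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def kernel_pegasos(X, y, lam=0.01, T=1000, gamma=0.5, seed=None):
    """Kernelized Pegasos-style SGD for the soft-margin hinge-loss SVM.

    X : (n, d) training inputs; y : (n,) labels in {-1, +1}.
    Returns the alpha counts defining the dual kernel expansion.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    alpha = np.zeros(n)           # alpha[j] counts margin violations of sample j
    for t in range(1, T + 1):
        i = rng.integers(n)       # uniform random sampling of one training point
        # Unscaled decision value of the current iterate at x_i
        s = sum(alpha[j] * y[j] * rbf_kernel(X[j], X[i], gamma)
                for j in range(n) if alpha[j] > 0)
        # Hinge loss active -> take a subgradient step (increment alpha_i)
        if y[i] * s / (lam * t) < 1.0:
            alpha[i] += 1.0
    return alpha

def predict(X_train, y_train, alpha, x_new, lam, T, gamma=0.5):
    """Sign of the kernel expansion at a new point."""
    s = sum(alpha[j] * y_train[j] * rbf_kernel(X_train[j], x_new, gamma)
            for j in range(len(alpha)) if alpha[j] > 0)
    return np.sign(s / (lam * T))
```

A per-step cost dominated by kernel evaluations against the points with nonzero alpha is what makes this family of methods fast in CPU time on large datasets, which matches the trade-off the abstract emphasizes.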
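Double (nested) cross-validation means hyperparameters are tuned in an inner loop while generalization accuracy is estimated in an outer loop, so the reported accuracy is not biased by the tuning. The paper's fold counts and hyperparameter grids are not given in the abstract; the sketch below shows the evaluation protocol with assumed values, using scikit-learn's SVC purely as a stand-in model.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Hypothetical stand-in data; the paper's datasets are not reproduced here.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

# Inner loop: hyperparameter search over (C, gamma) on each training split.
inner = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
    cv=5,
)

# Outer loop: accuracy of the fully tuned procedure on held-out folds.
outer_scores = cross_val_score(inner, X, y, cv=5)
print("Nested CV accuracy: %.3f +/- %.3f"
      % (outer_scores.mean(), outer_scores.std()))
```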