L1-norm Laplacian support vector machine for data reduction in semi-supervised learning
2021
As a semi-supervised learning method, the Laplacian support vector machine (LapSVM) is popular. Unfortunately, the model generated by LapSVM lacks sparsity. A sparse decision model is attractive because it enables data reduction and can improve performance. To obtain a sparse LapSVM model, we propose an $\ell_1$-norm Laplacian support vector machine ($\ell_1$-norm LapSVM), which replaces the $\ell_2$-norm in LapSVM with the $\ell_1$-norm. The $\ell_1$-norm LapSVM combines two sparsity-inducing techniques: $\ell_1$-norm regularization and the hinge loss function. We discuss two settings for the $\ell_1$-norm LapSVM, a linear one and a nonlinear (kernel) one. In the linear $\ell_1$-norm LapSVM, the sparse decision model means that only features with nonzero coefficients contribute to the decision; in other words, the linear $\ell_1$-norm LapSVM performs feature selection, achieving data reduction in the feature dimension. The nonlinear (kernel) $\ell_1$-norm LapSVM likewise achieves data reduction, in the form of sample selection. Moreover, the optimization problem of the $\ell_1$-norm LapSVM is a convex quadratic program, so it has a unique, global solution. Experimental results on semi-supervised classification tasks show that our $\ell_1$-norm LapSVM achieves comparable performance.
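The paper itself solves a convex quadratic program; purely as an illustration of the ingredients the abstract names (hinge loss on labeled points, a graph-Laplacian smoothness term over labeled and unlabeled points, and an $\ell_1$ penalty that zeroes out feature coefficients), here is a minimal numpy sketch. It is not the authors' formulation: it uses proximal subgradient descent instead of a QP solver, and all data and parameter values are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy semi-supervised data: features 0 and 1 are informative, the other
# 8 are pure noise. Only the first 50 of 200 points carry labels.
n, d, n_lab = 200, 10, 50
X = rng.normal(size=(n, d))
y_all = np.sign(X[:, 0] + X[:, 1])
XL, y = X[:n_lab], y_all[:n_lab]

# Graph Laplacian Lap = D - W from an RBF similarity graph over ALL
# points, labeled and unlabeled alike (the "manifold" term of LapSVM).
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
W = np.exp(-sq_dists / (2.0 * d))
np.fill_diagonal(W, 0.0)
Lap = np.diag(W.sum(axis=1)) - W

lam, gam, eta = 0.1, 1e-3, 1e-2  # l1 weight, Laplacian weight, step size
w = np.zeros(d)
for _ in range(3000):
    # Subgradient of the average hinge loss on the labeled points.
    viol = (y * (XL @ w)) < 1.0
    grad = -(XL * y[:, None]).T @ viol / n_lab
    # Gradient of the smoothness term gam * (Xw)^T Lap (Xw) / n^2.
    grad += 2.0 * gam * X.T @ (Lap @ (X @ w)) / n**2
    w -= eta * grad
    # Proximal step for the l1 penalty: soft-thresholding can drive
    # uninformative coefficients to exactly zero (feature selection).
    w = np.sign(w) * np.maximum(np.abs(w) - eta * lam, 0.0)
```

Because the soft-thresholding step produces exact zeros rather than merely small values, the surviving nonzero coordinates of `w` identify the selected features, mirroring the feature-selection (data-reduction) behavior the abstract claims for the linear $\ell_1$-norm LapSVM.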