Redefining the White-Box of k-Nearest Neighbor Support Vector Machine for Better Classification
2020
k-Nearest Neighbor (kNN) methods compute the distances and similarities among patterns of data points after Principal Component Analysis (PCA) is applied to the ten datasets. Weighted distances are then formulated, computed, and adjusted in concert with the Gaussian kernel width of a Support Vector Machine (SVM). This is done through the formulations proposed in this research, which are derived from a study of the relationships among the distances and similarities of patterns of data points and the SVM kernel width. The proposed approach also customizes and categorizes the kernel scale of the Gaussian kernel width. Together, these constitute the white-box algorithms to be redefined. The developed algorithm avoids or minimizes the tradeoff and hinge-loss problems of typical SVM classification. When applied to datasets drawn mainly from the UCI data repository, the proposed algorithms classify more accurately than a typical SVM whose Gaussian kernel width is not adjusted accordingly. The optimal kernel width from the customized kernel scale, computed by the proposed formulations, is supplied to the SVM classifier. The results show that dimensionality reduction by PCA, together with the distances among patterns computed by kNN and then by the proposed formulations, can optimally adjust the Gaussian kernel width of the SVM so that classification accuracy is significantly improved.
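The abstract does not reproduce the paper's weighted-distance formulations, but the overall pipeline it describes (PCA, kNN distances among patterns, a distance-derived Gaussian kernel width, SVM classification) can be sketched as follows. This is a minimal sketch assuming scikit-learn: the median kNN distance used to set the kernel width is a hypothetical stand-in for the paper's proposed formulations, and the breast-cancer dataset stands in for the ten UCI datasets.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: dimensionality reduction with PCA (here: keep 95% of variance).
pca = PCA(n_components=0.95)
X_train_p = pca.fit_transform(X_train)
X_test_p = pca.transform(X_test)

# Step 2: k-nearest-neighbor distances among the training patterns.
k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(X_train_p)
dists, _ = nn.kneighbors(X_train_p)
knn_dists = dists[:, 1:]  # drop each point's zero distance to itself

# Step 3: derive a kernel width sigma from those distances. The median
# kNN distance is a stand-in heuristic, not the paper's formulation.
sigma = np.median(knn_dists)
gamma = 1.0 / (2.0 * sigma ** 2)  # sklearn's RBF kernel: exp(-gamma * ||x - x'||^2)

# Step 4: SVM classification with the data-driven Gaussian kernel width.
clf = SVC(kernel="rbf", gamma=gamma).fit(X_train_p, y_train)
print("test accuracy:", clf.score(X_test_p, y_test))

The key design point this illustrates is that the RBF width is set from the geometry of the data rather than by grid search, which is the role the paper's formulations play in adjusting the kernel width.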