Minimizing Cluster Errors in LP-Based Nonlinear Classification

2014 
Recent work has focused on techniques for constructing a learning machine able to classify, at any given accuracy, all members of two mutually exclusive classes. Good numerical results have been reported; however, some concerns remain about prediction ability on large databases. This paper introduces clustering, which decreases the number of variables in the linear programming (LP) models that need to be solved at each iteration. Preliminary results show better prediction accuracy while keeping the good characteristics of the previous classification scheme: a piecewise (non)linear surface that discriminates individuals from two classes with an a priori classification accuracy is built, and at each iteration a new piece of the surface is obtained by solving an LP model. The technique proposed in this work reduces the number of LP variables by linking one error variable to each cluster, instead of one error variable to each individual in the population. Preliminary numerical results are reported on real datasets from the UC Irvine repository of machine learning databases.
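The abstract's core idea, one error variable per cluster rather than per individual, can be sketched as a single LP iteration. The sketch below is illustrative only, not the paper's exact formulation: it assumes cluster labels are precomputed (the paper's clustering step is not detailed here), uses a standard margin-style constraint `y_i (w·x_i + b) ≥ 1 − e_{cluster(i)}`, and relies on SciPy's `linprog` solver; the function name `fit_lp_piece` is hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def fit_lp_piece(X, y, clusters):
    """One LP iteration (illustrative sketch): find a hyperplane w.x + b
    whose violations are charged to one error variable per cluster,
    so the LP has d + 1 + K variables instead of d + 1 + n."""
    n, d = X.shape
    K = int(clusters.max()) + 1
    # Decision variables: [w (d), b (1), e (K)]; minimize total cluster error.
    c = np.concatenate([np.zeros(d + 1), np.ones(K)])
    # y_i * (w.x_i + b) >= 1 - e_{cluster(i)}, rewritten as A_ub @ z <= b_ub.
    A_ub = np.zeros((n, d + 1 + K))
    for i in range(n):
        A_ub[i, :d] = -y[i] * X[i]
        A_ub[i, d] = -y[i]
        A_ub[i, d + 1 + clusters[i]] = -1.0
    b_ub = -np.ones(n)
    # w and b are free; the cluster error variables are nonnegative.
    bounds = [(None, None)] * (d + 1) + [(0, None)] * K
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    w, b, e = res.x[:d], res.x[d], res.x[d + 1:]
    return w, b, e
```

With n individuals grouped into K clusters, the error block of the LP shrinks from n variables to K, which is the reduction the paper exploits at each iteration of building the piecewise surface.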