Learning Robust Discriminant Subspace Based on Joint L2,p- and L2,s-Norm Distance Metrics.

2020 
Recently, many works on discriminant analysis have promoted the robustness of models against outliers by adopting the L1- or L2,1-norm as the distance metric. However, both their robustness and their discriminant power remain limited. In this article, we present a new robust discriminant subspace (RDS) learning method for feature extraction, whose objective function is formulated in a different form. To guarantee that the subspace is both robust and discriminative, we measure the within-class distances with the L2,s-norm and the between-class distances with the L2,p-norm. This formulation also endows the method with rotational invariance. Since the proposed model involves both L2,p-norm maximization and L2,s-norm minimization, it is very challenging to solve. To address this problem, we present an efficient nongreedy iterative algorithm. Moreover, motivated by the trace ratio criterion, we derive a mechanism that automatically balances the contributions of the different terms in the objective. RDS is very flexible, as it can be extended to other existing feature extraction techniques. An in-depth theoretical analysis of the algorithm's convergence is also presented. Experiments on several typical image classification databases yield promising results that indicate the effectiveness of RDS.
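The abstract describes an objective that maximizes between-class distances under the L2,p-norm while minimizing within-class distances under the L2,s-norm, balanced in a trace-ratio style. The following is a minimal NumPy sketch of evaluating such a ratio criterion for a fixed projection W; the function name rds_objective, the parameter defaults, and the class-size weighting are illustrative assumptions, not the paper's exact formulation or code.

```python
import numpy as np

def rds_objective(X, y, W, p=1.0, s=1.0):
    """Sketch: between-class L2,p distances over within-class L2,s distances
    in the projected subspace. X: (n_samples, n_features) data matrix,
    y: class labels, W: (n_features, d) projection. Names and defaults are
    illustrative assumptions, not taken from the paper's implementation."""
    y = np.asarray(y)
    Xp = X @ W                       # project samples into the subspace
    mean_all = Xp.mean(axis=0)       # global mean in the subspace
    between, within = 0.0, 0.0
    for c in np.unique(y):
        Xc = Xp[y == c]
        mean_c = Xc.mean(axis=0)
        n_c = Xc.shape[0]
        # between-class term: L2 distance of the class mean to the global
        # mean, raised to the power p (L2,p-style aggregation over classes)
        between += n_c * np.linalg.norm(mean_c - mean_all) ** p
        # within-class term: L2 distances of samples to their class mean,
        # raised to the power s (L2,s-style aggregation over samples)
        within += np.sum(np.linalg.norm(Xc - mean_c, axis=1) ** s)
    # trace-ratio-style balance of the two terms
    return between / within
```

In the paper's nongreedy iterative algorithm, W itself is updated across iterations; this sketch only evaluates the criterion for a given projection.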