$\ell_p$ Norm Independently Interpretable Regularization Based Sparse Coding for Highly Correlated Data

2019 
Sparse coding, which aims at finding appropriate sparse representations of data over an overcomplete dictionary, is a well-established signal processing methodology that has proven effective in various areas. The choice of sparsity constraint can greatly influence the performance of sparse coding algorithms. However, commonly used sparse regularizers may not be robust under high-coherence conditions. In this paper, inspired by the independently interpretable lasso (IILasso), which accounts for the coherence between sensing matrix columns in its constraint so as to select uncorrelated variables, we propose a new regularization by introducing the $\ell_{p}$ norm $(0 < p < 1)$ into the regularization term of IILasso. The new regularization can efficiently enhance performance in obtaining sparse and accurate coefficients. To solve the optimization problem with the new regularization, we propose a coordinate descent algorithm with a weighted $\ell_{1}$ norm, named independently interpretable weighted lasso (IIWLasso), and two proximal-operator methods, named the independently interpretable iterative shrinkage thresholding algorithm (II-ISTA) and the independently interpretable proximal operator for $\ell_{\frac{2}{3}}$ norm regularization (II2/3PO). We present experiments on synthetic data and gene expression data to validate the performance of our proposed algorithms. The results show that all independently interpretable algorithms outperform their original counterparts under different coherence conditions. Among them, IIWLasso obtains the best overall performance, both in relative norm error and support error on synthetic data and in misclassification error on tenfold cross-validated gene expression data.
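The abstract's core idea, penalizing each coefficient's $\ell_{1}$ term more heavily when its dictionary column is correlated with columns of other active coefficients, can be illustrated with a short coordinate descent sketch. This is a minimal, hypothetical rendering of the IIWLasso-style weighted update, not the authors' exact algorithm; the weight formula `1 + alpha * R[j] @ |beta|` and all parameter names are assumptions for illustration.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ii_weighted_lasso_cd(X, y, lam=0.05, alpha=0.5, n_iter=100):
    """Coordinate descent for a coherence-weighted lasso (illustrative sketch).

    Each coefficient's l1 penalty is inflated by the absolute correlation
    between its column and the columns of currently nonzero coefficients,
    discouraging the selection of highly correlated variables together.
    Assumes roughly standardized columns of X.
    """
    n, p = X.shape
    # Column-wise absolute correlation of the design matrix.
    R = np.abs(X.T @ X) / n
    np.fill_diagonal(R, 0.0)
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j's contribution removed.
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r / n
            # Penalty weight grows with correlation to active features.
            w = 1.0 + alpha * R[j] @ np.abs(beta)
            beta[j] = soft_threshold(z, lam * w) / (X[:, j] @ X[:, j] / n)
    return beta
```

On weakly correlated Gaussian columns this behaves like an ordinary lasso; the extra weight only bites when two highly correlated columns compete for the same signal, which is the regime the paper targets.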