Sub-linear Time Support Recovery for Compressed Sensing using Sparse-Graph Codes

2019 
We study the support recovery problem for compressed sensing, where the goal is to reconstruct the sparsity pattern of a high-dimensional $K$-sparse signal $\mathrm{x} \in \mathbb{R}^{N}$, as well as the corresponding sparse coefficients, from low-dimensional linear measurements with and without noise. Our key contribution is a new compressed sensing framework consisting of a family of carefully designed sparse measurement matrices with minimal measurement costs, together with a low-complexity recovery algorithm. Specifically, the measurement matrix in our framework is sparsified using capacity-approaching sparse-graph codes, so that the sparse coefficients can be recovered efficiently in a few iterations by performing simple error decoding over the observations. We formally connect this general recovery problem with sparse-graph decoding in packet communication systems, and analyze our framework in terms of measurement cost, computational complexity, and recovery performance. We show that in the noiseless setting, our framework can recover any arbitrary $K$-sparse signal in $O(K)$ time using $2K$ measurements asymptotically, with a vanishing error probability. In the noisy setting, when the sparse coefficients take values in a finite, quantized alphabet, our framework can achieve the same goal in time $O(K\log(N/K))$ using $O(K\log(N/K))$ measurements obtained from a measurement matrix with elements in $\{-1, 0, 1\}$. When the sparsity $K$ is sub-linear in the signal dimension, $K = O(N^{\delta})$ for some $0 < \delta < 1$, our results are order-optimal in terms of measurement cost and run-time, both of which are sub-linear in the signal dimension $N$. The sub-linear measurement cost and run-time can also be achieved with continuous-valued sparse coefficients, at the cost of a slight increase in the logarithmic factors. More specifically, in the continuous alphabet setting, when $K = O(N^{\delta})$ and the magnitudes of all the sparse coefficients are bounded below by a positive constant, our algorithm can recover an arbitrarily large $(1-p)$-fraction of the support of the sparse signal using $O(K\log(N/K)\log\log(N/K))$ measurements and $O(K\log^{1+r}(N/K))$ run-time, where $r$ is an arbitrarily small constant. For each recovered sparse coefficient, we can achieve an $O(\epsilon)$ error for an arbitrarily small constant $\epsilon$. In addition, if the magnitudes of all the sparse coefficients are upper bounded by $O(K^{c})$ for some constant $c < 1$, then we are able to provide a strong $\ell_{1}$ recovery guarantee for the estimated signal $\widehat{\mathrm{x}}$: $\|\widehat{\mathrm{x}} - \mathrm{x}\|_{1} \le \kappa \|\mathrm{x}\|_{1}$, where the constant $\kappa$ can be arbitrarily small. This offers the desired scalability of our framework, which can potentially enable real-time or near-real-time processing for massive datasets featuring sparsity, as arise in a multitude of practical applications.
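To make the recovery mechanism concrete, the toy sketch below illustrates the general "hash into bins, then iteratively peel singletons" idea behind sparse-graph-coded compressed sensing in the noiseless setting. It is a simplified illustration under assumptions, not the paper's exact construction: it replaces the capacity-approaching sparse-graph-code ensemble with a uniform random left-regular hash, and uses an ad hoc ratio test on per-bin observations $(\sum_i x_i,\ \sum_i i\,x_i)$ to detect bins containing a single nonzero coefficient. All names (`peel_decode`, `bins_of`, etc.) are hypothetical.

```python
import numpy as np

def peel_decode(y_sum, y_idx, bins_of, N, max_iters=50):
    """Minimal noiseless peeling decoder (illustrative sketch only).

    Each bin b carries two observations:
        y_sum[b] = sum of x[i] over indices i hashed to bin b
        y_idx[b] = sum of i * x[i] over the same indices
    A bin holding exactly one nonzero ("singleton") reveals its index as
    y_idx[b] / y_sum[b]; the recovered coefficient is then subtracted
    ("peeled") from every bin containing that index, which may expose
    new singletons in later passes.
    """
    x_hat = {}
    for _ in range(max_iters):
        progress = False
        for b in range(len(y_sum)):
            if abs(y_sum[b]) < 1e-9:      # empty or already-peeled bin
                continue
            ratio = y_idx[b] / y_sum[b]
            i = int(round(ratio))
            # Singleton test: the ratio must be (numerically) a valid
            # index that actually hashes to this bin. With generic real
            # coefficients, multiton bins rarely pass this test.
            if 0 <= i < N and abs(ratio - i) < 1e-6 and b in bins_of[i]:
                val = y_sum[b]
                x_hat[i] = x_hat.get(i, 0.0) + val
                for bb in bins_of[i]:     # peel from all incident bins
                    y_sum[bb] -= val
                    y_idx[bb] -= i * val
                progress = True
        if not progress:
            break
    return x_hat

# Toy usage: N = 1000, K = 5; each index is hashed into d = 3 of M = 20
# bins (a left-regular random graph, standing in for the paper's coded
# ensemble).
rng = np.random.default_rng(0)
N, K, M, d = 1000, 5, 20, 3
support = rng.choice(N, K, replace=False)
x = np.zeros(N)
x[support] = rng.standard_normal(K)

bins_of = {i: rng.choice(M, d, replace=False).tolist() for i in range(N)}
y_sum, y_idx = np.zeros(M), np.zeros(M)
for i in support:
    for b in bins_of[i]:
        y_sum[b] += x[i]
        y_idx[b] += i * x[i]

x_hat = peel_decode(y_sum, y_idx, bins_of, N)
print(sorted(x_hat))       # recovered support
print(sorted(support))     # true support
```

The design intuition this sketch captures: with real-valued coefficients, the index ratio of a multiton bin is generically not a near-integer, so singletons can be isolated and peeled one at a time, and each peel can unlock further singletons. The paper's contribution is to replace the ad hoc hash and ratio test with carefully designed sparse-graph-code bin structures and error decoding, which is what yields the stated $O(K)$-time, $2K$-measurement noiseless guarantee and the noise-robust variants.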