An Alternating Minimization Strategy for Sparse Blind Deconvolution - Convergence Analysis and Concentration Inequalities.

2017 
We consider the problem of sparse blind deconvolution (SBD), in which one estimates the blur kernel and the sparse excitation. Considering linear convolution, we derive a sufficient condition for stable deconvolution and show that the columns of the linear convolution matrix form a Riesz basis, with the tightness of the Riesz bounds determined by the autocorrelation of the blur kernel. The SBD cost function, developed within a Bayesian framework, is non-convex and non-smooth, consisting of an $\ell_2$ data-fidelity term and an $\ell_p$-norm ($0 \le p \le 1$) regularizer that serves as the sparsity-promoting prior. We employ an alternating minimization (Alt. Min.) strategy to optimize the cost, alternating between $\ell_p$-norm and $\ell_2$-norm optimization across the iterations. The resulting algorithm is referred to as the alternating $\ell_p$-$\ell_2$ projections algorithm (ALPA). Due to the non-convexity of the cost, the convergence of the algorithm to a local minimum is largely influenced by the initialization. Taking the least-squares estimate as the initialization, we derive probabilistic bounds on the event that the average absolute error between the true excitation and its estimate exceeds a threshold. The bounds depend on the conditioning of the system and the noise variance. Assuming bounded noise allows us to tighten the bounds using the Hoeffding inequality. Further, we show that, in every iteration, the $\epsilon$-regularized cost function is non-increasing and, more importantly, bounds the original $\ell_p$-based cost. We demonstrate applications to blind deconvolution of natural speech signals and show that the proposed strategy is capable of separating a speech signal into a sparse excitation and a vocal tract response. We also report comparisons with some state-of-the-art deconvolution algorithms.
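To make the alternating $\ell_p$-$\ell_2$ structure concrete, the following is a minimal sketch of such a scheme in the spirit of ALPA, not the authors' reference implementation. The $\epsilon$-smoothed $\ell_p$ penalty, the IRLS-style excitation update, the least-squares kernel update, and all parameter values (p, lam, eps, kernel length) are assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(h, n):
    """Linear-convolution matrix H such that H @ x == np.convolve(h, x)."""
    m = len(h) + n - 1
    col = np.r_[h, np.zeros(m - len(h))]
    row = np.r_[h[0], np.zeros(n - 1)]
    return toeplitz(col, row)

def alpa_sketch(y, kernel_len, p=0.5, lam=1e-2, eps=1e-6, iters=50):
    n = len(y) - kernel_len + 1               # excitation length (linear convolution)
    h = np.zeros(kernel_len); h[0] = 1.0      # hypothetical initialization: unit impulse
    H = conv_matrix(h, n)
    x = np.linalg.lstsq(H, y, rcond=None)[0]  # least-squares initial excitation
    for _ in range(iters):
        # x-step: epsilon-regularized l_p minimization via one IRLS-type update;
        # weights w_i = p * (x_i^2 + eps)^(p/2 - 1) reweight the quadratic penalty.
        H = conv_matrix(h, n)
        w = p * (x**2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(H.T @ H + lam * np.diag(w), H.T @ y)
        # h-step: plain least-squares fit of the kernel given the excitation.
        X = conv_matrix(x, kernel_len)
        h = np.linalg.lstsq(X, y, rcond=None)[0]
        h /= np.linalg.norm(h) + 1e-12        # fix the scale ambiguity
    return h, x

# Toy usage: sparse spikes convolved with a short smooth kernel, plus noise.
rng = np.random.default_rng(0)
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = rng.standard_normal(10)
h_true = np.exp(-0.5 * np.arange(12)); h_true /= np.linalg.norm(h_true)
y = np.convolve(h_true, x_true) + 0.01 * rng.standard_normal(211)
h_est, x_est = alpa_sketch(y, kernel_len=12)
```

The IRLS-style reweighting shown here is one common way to handle the non-smooth $\ell_p$ term; the paper's actual update rules, regularization schedule, and stopping criteria may differ.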