Non-parametric Bayesian dictionary learning based on Laplace noise

2021 
Sparse representation over learned over-complete dictionaries is an active topic in computer vision and machine learning. In a probabilistic setting, an over-complete dictionary can be learned with non-parametric Bayesian techniques based on the Beta Process. However, traditional probabilistic dictionary learning methods assume that the noise follows a Gaussian distribution, and can therefore only remove Gaussian noise. To remove outliers and more complex noise, we propose a non-parametric Bayesian dictionary learning method that instead models the noise with a Laplacian distribution. Because the Laplacian distribution is non-conjugate, the posteriors of the latent variables become harder to compute, so we replace the L1 density function with a superposition of infinitely many Gaussian distributions whose mixing weights are controlled by an extra hidden variable. Bayesian inference is then applied to learn all key parameters of the proposed probabilistic model, which avoids manual parameter setting and fine tuning. In the experiments, we evaluate the performance of different algorithms on removing salt-and-pepper noise and mixed noise. The experimental results show that the PSNR of our algorithm is at least 2-4 dB higher than that of other classic algorithms.
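The noise model above rests on the classical Gaussian scale-mixture representation of the Laplace distribution (Andrews and Mallows, 1974): a zero-mean Laplace density is an infinite mixture of zero-mean Gaussians whose variance follows an exponential distribution, with the latent variance playing the role of the extra hidden variable mentioned in the abstract. A minimal sketch of the identity, using illustrative symbols b (Laplace scale) and tau (latent variance) that need not match the paper's notation:

\[
  \frac{1}{2b}\exp\!\left(-\frac{|x|}{b}\right)
  = \int_0^{\infty} \mathcal{N}(x \mid 0, \tau)\,
    \frac{1}{2b^{2}}\exp\!\left(-\frac{\tau}{2b^{2}}\right) d\tau,
\]
\[
  \text{i.e.}\quad
  \varepsilon \mid \tau \sim \mathcal{N}(0,\tau),\qquad
  \tau \sim \mathrm{Exp}\!\left(\tfrac{1}{2b^{2}}\right)
  \;\Longrightarrow\;
  \varepsilon \sim \mathrm{Laplace}(0,b).
\]

Conditioned on tau, the noise is Gaussian again, so the conjugate updates of standard Beta-Process dictionary learning carry over at the cost of one extra sampling step for tau. The identity can also be checked numerically; the following is an illustrative sketch, not the authors' code:

import numpy as np

# Draw tau ~ Exp(rate = 1/(2 b^2)), then eps | tau ~ N(0, tau);
# marginally, eps should follow Laplace(0, b).
rng = np.random.default_rng(0)
b = 1.5
n = 1_000_000
tau = rng.exponential(scale=2 * b**2, size=n)   # NumPy's scale = 1 / rate
eps = rng.normal(0.0, np.sqrt(tau))             # variance tau -> std sqrt(tau)

# E|eps| equals b for Laplace(0, b); both estimates should be close to 1.5.
print(np.mean(np.abs(eps)))
print(np.mean(np.abs(rng.laplace(0.0, b, size=n))))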