Consensus Based Distributed Sparse Bayesian Learning by Fast Marginal Likelihood Maximization

2020 
For swarm systems, distributed processing is of paramount importance, and Bayesian methods are preferred for their robustness. Existing distributed sparse Bayesian learning (SBL) methods either rely on automatic relevance determination (ARD), which involves a computationally complex reweighted l1-norm optimization, or they use loopy belief propagation, which is not guaranteed to converge. This paper therefore builds on the fast marginal likelihood maximization (FMLM) method to develop a faster distributed SBL variant. The proposed method has a low communication overhead and can be distributed by simple consensus methods. Simulations indicate better performance than the distributed ARD-based version, and the same performance as the original FMLM.
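The consensus primitive the abstract refers to can be illustrated with a standard average-consensus iteration, where each node repeatedly averages with its neighbors until all nodes agree on the network-wide mean. This is a generic sketch of that primitive, not the paper's actual algorithm; the step size, network topology, and the statistic being averaged are illustrative assumptions.

```python
import numpy as np

def consensus_average(values, adjacency, step=0.25, iters=100):
    """Linear average consensus: x <- x - step * (L @ x), with L the
    graph Laplacian. For a connected undirected graph and a small
    enough step, every node converges to the mean of the initial values."""
    x = np.asarray(values, dtype=float).copy()
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A  # graph Laplacian
    for _ in range(iters):
        x = x - step * (L @ x)  # each node mixes with its neighbors
    return x

# Hypothetical 4-node ring network of agents, each holding a local statistic.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
x = consensus_average([1.0, 2.0, 3.0, 4.0], A)
print(np.allclose(x, 2.5, atol=1e-6))  # all nodes reach the global mean
```

In a distributed SBL setting, iterations of this kind let each agent obtain network-wide sums (e.g., of local sufficient statistics) while only ever exchanging scalars or small vectors with direct neighbors, which is what keeps the communication overhead low.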