Hyperspherical Weight Uncertainty in Neural Networks

2021 
Bayesian neural networks learn a posterior probability distribution over the weights of the network to estimate the uncertainty in predictions. Parameterizing the prior and posterior distributions as Gaussians, as in Monte Carlo Dropout and Bayes-by-Backprop (BBB), often fails to capture latent hyperspherical structure [1, 15]. In this paper, we propose an enhanced approach, called Hypersphere Bayes by Backprop, that selects the weights of each layer of a neural network [2] from a uniform distribution on the hypersphere to efficiently approximate the posterior distribution. We show that this Hyperspherical Weight Uncertainty in Neural Networks models a richer variational distribution than previous methods and obtains well-calibrated predictive uncertainty in deep learning across non-linear regression, image classification, and high-dimensional active learning. We then demonstrate how this uncertainty in the weights can be used to improve generalisation in the Variational Auto-Encoder (VAE) setting.
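The abstract does not specify the paper's sampling procedure, but a standard way to realise a uniform distribution on the unit hypersphere, the prior assumed here, is to normalise isotropic Gaussian draws. The sketch below illustrates this for per-layer weight vectors; the function name and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sample_hypersphere(n_samples, dim, rng=None):
    """Draw samples uniformly from the unit hypersphere S^{dim-1}.

    Because the isotropic Gaussian is rotation-invariant, normalising
    its draws to unit length gives a uniform distribution over
    directions on the sphere.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal((n_samples, dim))  # isotropic Gaussian draws
    return x / np.linalg.norm(x, axis=1, keepdims=True)  # project to sphere

# Illustrative usage: each row is one candidate weight vector for a layer.
w = sample_hypersphere(4, 16)
print(np.linalg.norm(w, axis=1))  # every row has unit norm
```

In a variational scheme such as BBB, samples like these would feed the reparameterised forward pass; the actual variational family used by the paper may differ.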