Incremental Kernel Principal Components Subspace Inference With Nyström Approximation for Bayesian Deep Learning

2021 
Subspace inference methods based on low-dimensional principal component analysis (PCA), a state-of-the-art approach to Bayesian inference, can provide approximately accurate predictive distributions and well-calibrated uncertainty. However, the main limitation of PCA is that it is a linear feature extractor and therefore cannot effectively represent the nonlinear, high-dimensional parameter space of deep neural networks (DNNs). In this paper, we first address the linearity of PCA in high-dimensional spaces by applying kernel PCA to extract higher-order statistical information from the parameter space of DNNs. Second, to improve the efficiency of subsequent computation, we propose a strictly ordered incremental kernel PCA (InKPCA) subspace of the parameter space within stochastic gradient descent (SGD) trajectories. Within the proposed InKPCA subspace, we employ two approximate inference methods: elliptical slice sampling (ESS) and variational inference (VI). Finally, to further improve the memory efficiency of computing the kernel matrix, we apply the Nyström approximation to determine a suitable subset size of the original dataset. The novelty of this paper is that it is the first to apply the proposed InKPCA subspace with the Nyström approximation to Bayesian inference in DNNs, and the results show that it produces more accurate predictions and better-calibrated predictive uncertainty in regression and classification tasks in deep learning.
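To make the pipeline described above concrete, the sketch below shows Nyström-approximated kernel PCA over a matrix of flattened weight snapshots, implemented here with scikit-learn's Nystroem feature map followed by linear PCA (which approximates kernel PCA in the induced feature space). This is a minimal illustration, not the paper's implementation: the snapshot matrix W, the RBF kernel, the gamma value, and the sizes m and k are all assumptions made for the example.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.decomposition import PCA

# W: (n_snapshots, n_params) matrix of flattened DNN weights collected
# along the SGD trajectory. A random toy stand-in is used here; the
# paper's snapshot-collection schedule is not specified in the abstract.
rng = np.random.default_rng(0)
W = rng.normal(size=(50, 1000))

# Nyström approximation: map snapshots through m landmark points so the
# full n x n kernel matrix is never formed, saving memory.
m = 20
feature_map = Nystroem(kernel="rbf", gamma=1e-3, n_components=m, random_state=0)
Phi = feature_map.fit_transform(W)   # (n_snapshots, m) approximate features

# Linear PCA in the approximate feature space = approximate kernel PCA.
k = 5                                # subspace dimensionality (assumed)
pca = PCA(n_components=k)
Z = pca.fit_transform(Phi)           # low-dimensional subspace coordinates
print(Z.shape)                       # (50, 5)
```

The design choice mirrors the abstract's motivation: the Nyström landmarks bound the memory cost of the kernel computation, while the final PCA yields the low-dimensional subspace in which Bayesian inference is then performed.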
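Elliptical slice sampling itself is a standard algorithm (Murray, Adams, and MacKay, 2010), and the following sketch shows a single ESS update under a zero-mean Gaussian prior. In a subspace-inference setting, the state f would be the low-dimensional subspace coefficients and log_lik the data log-likelihood of the DNN with weights reconstructed from those coefficients; those roles are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def elliptical_slice(f, prior_sample, log_lik, rng):
    """One elliptical slice sampling update for a zero-mean Gaussian
    prior; `prior_sample` is a fresh draw nu ~ N(0, Sigma)."""
    nu = prior_sample
    log_y = log_lik(f) + np.log(rng.uniform())   # slice height
    theta = rng.uniform(0.0, 2.0 * np.pi)        # initial angle
    lo, hi = theta - 2.0 * np.pi, theta          # shrinking bracket
    while True:
        # Propose a point on the ellipse through f and nu.
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_y:
            return f_new                         # accepted
        # Shrink the bracket toward the current point and retry;
        # theta -> 0 recovers f, so the loop always terminates.
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)
```

For example, with a standard normal prior over k subspace coefficients, one update would be `z = elliptical_slice(z, rng.normal(size=k), log_lik, rng)`; ESS needs no step-size tuning, which is one reason it pairs well with low-dimensional subspaces.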