BayesFlow: Learning complex stochastic models with invertible neural networks

2020 
Estimating the parameters of mathematical models is a common problem in almost all branches of science. However, this problem can prove notably difficult when processes and model descriptions become increasingly complex and an explicit likelihood function is not available. With this work, we propose a novel method for globally amortized Bayesian inference based on invertible neural networks, which we call BayesFlow. The method learns a global probabilistic mapping between parameters and data from simulations. Once trained, the model can be reused to perform fast, fully Bayesian inference on multiple data sets. In addition, the method incorporates a summary network trained to embed the observed data into maximally informative summary statistics. Learning summary statistics from data makes the method applicable to various modeling scenarios where standard inference techniques fail. We demonstrate the utility of BayesFlow on challenging intractable models from population dynamics, epidemiology, cognitive science, and ecology. We argue that BayesFlow provides a general framework for building reusable Bayesian parameter estimation machines for any process model from which data can be simulated.
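The invertible mapping between parameters and a latent space, conditioned on data summaries, can be illustrated with a single conditional affine coupling layer, the standard building block of flow-based architectures like the one the abstract describes. The sketch below is illustrative only and is not the authors' implementation; the dimensions, the toy conditioner network, and its random weights are all hypothetical stand-ins for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes: 4 model parameters, 3 learned summary statistics.
D_THETA, D_SUMMARY, HIDDEN = 4, 3, 16
half = D_THETA // 2

# Random weights standing in for a trained conditioner network.
W1 = rng.normal(size=(half + D_SUMMARY, HIDDEN)) * 0.1
W2 = rng.normal(size=(HIDDEN, 2 * half)) * 0.1

def conditioner(theta_a, summary):
    """Tiny MLP producing log-scale s and shift t from (theta_a, summary)."""
    h = np.tanh(np.concatenate([theta_a, summary]) @ W1)
    s, t = np.split(h @ W2, 2)
    return s, t

def coupling_forward(theta, summary):
    """theta -> z: transform one half of theta conditioned on the other
    half and on the data summary; the first half passes through unchanged."""
    a, b = theta[:half], theta[half:]
    s, t = conditioner(a, summary)
    return np.concatenate([a, b * np.exp(s) + t])

def coupling_inverse(z, summary):
    """z -> theta: exact analytic inverse, which is what makes drawing
    posterior samples cheap once the network is trained."""
    a, zb = z[:half], z[half:]
    s, t = conditioner(a, summary)
    return np.concatenate([a, (zb - t) * np.exp(-s)])

theta = rng.normal(size=D_THETA)
summary = rng.normal(size=D_SUMMARY)
z = coupling_forward(theta, summary)
theta_rec = coupling_inverse(z, summary)
print(np.allclose(theta, theta_rec))  # inversion is exact by construction
```

In a full flow, many such layers are stacked (with permutations between them), the summary vector is produced by the learned summary network rather than given, and training pushes `z` toward a standard Gaussian so that sampling `z` and inverting yields posterior draws for the parameters.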