Harnessing Low-Fidelity Data to Accelerate Bayesian Optimization via Posterior Regularization

2020 
Bayesian optimization (BO) is a powerful paradigm for derivative-free global optimization of a black-box objective function (BOF) that is expensive to evaluate. However, the overhead of BO can still be prohibitive for problems with highly expensive function evaluations. In this paper, we investigate how to reduce the number of function evaluations required by BO without compromising solution quality. We explore the idea of posterior regularization to harness low-fidelity (LF) data within the Gaussian process upper confidence bound (GP-UCB) framework. The LF data can arise from previous evaluations of an LF approximation of the BOF or of a related optimization task. An auxiliary GP model, termed the LF-GP, is trained to fit the LF data. We develop a fusion operator, termed dynamic weighted product of experts (DW-POE), which induces the regularization on the posterior of the BOF. The influence of the LF-GP model on the resulting regularized posterior is adaptively adjusted via a Bayesian formalism. Extensive experimental results on benchmark BOF optimization tasks demonstrate the superior performance of the proposed algorithm over state-of-the-art baselines.
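The abstract does not spell out the fusion step, but a weighted product of Gaussian experts has a standard closed form: raising each Gaussian to its weight and multiplying yields another Gaussian whose precision is the weight-scaled sum of the experts' precisions. The sketch below illustrates that mechanic together with a plain GP-UCB acquisition. The function names (dw_poe_fuse, ucb) and the fixed weights beta_hf/beta_lf are illustrative assumptions; the paper's dynamic, Bayesian adaptation of the weights is not reproduced here.

```python
import numpy as np

def dw_poe_fuse(mu_hf, var_hf, mu_lf, var_lf, beta_hf, beta_lf):
    """Weighted product-of-experts fusion of two Gaussian posteriors.

    Each expert density is raised to its weight; the (unnormalized)
    product is again Gaussian with
        precision = beta_hf / var_hf + beta_lf / var_lf
    and a precision-weighted mean.
    """
    prec = beta_hf / var_hf + beta_lf / var_lf
    var = 1.0 / prec
    mu = var * (beta_hf * mu_hf / var_hf + beta_lf * mu_lf / var_lf)
    return mu, var

def ucb(mu, var, kappa=2.0):
    """GP-UCB acquisition: posterior mean plus scaled standard deviation."""
    return mu + kappa * np.sqrt(var)

# Example: fuse high- and low-fidelity GP predictions at three candidate
# points, then pick the UCB maximizer as the next evaluation (weights
# are hand-set here purely for illustration).
mu_hf = np.array([0.2, 0.5, 0.1]); var_hf = np.array([0.30, 0.25, 0.40])
mu_lf = np.array([0.3, 0.4, 0.6]); var_lf = np.array([0.05, 0.05, 0.05])
mu, var = dw_poe_fuse(mu_hf, var_hf, mu_lf, var_lf, beta_hf=1.0, beta_lf=0.5)
next_idx = int(np.argmax(ucb(mu, var)))
```

Note the design intuition visible even in this toy version: a confident (low-variance) LF expert pulls the fused mean toward its prediction and shrinks the fused variance, so down-weighting it when it disagrees with high-fidelity data is what the paper's adaptive weighting is meant to accomplish.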