A NON-NESTED INFILLING STRATEGY FOR MULTI-FIDELITY BASED EFFICIENT GLOBAL OPTIMIZATION

2020 
Efficient Global Optimization (EGO) has become a standard approach for the global optimization of complex systems with high computational costs. EGO uses a training set of objective function values, computed at selected input points, to construct a statistical surrogate model with low evaluation cost, on which the optimization procedure is applied. The training set is sequentially enriched by selecting new points according to a prescribed infilling strategy, so as to converge to the optimum of the original costly model. Multi-fidelity approaches, which combine evaluations of the quantity of interest at different fidelity levels, have recently been introduced to reduce the computational cost of building a global surrogate model. However, the use of multi-fidelity approaches in the context of EGO is still an open research topic. In this work, we propose a new, effective infilling strategy for multi-fidelity EGO. Our infilling strategy has the particularity of relying on non-nested training sets, a characteristic that comes with several computational benefits. To enrich the multi-fidelity training set, the strategy selects the next input point together with the fidelity level at which the objective function is evaluated. This contrasts with previous nested approaches, which require evaluations at all lower fidelity levels and are more demanding when updating the surrogate. The resulting EGO procedure achieves a significantly reduced computational cost, avoiding computations at unnecessary fidelity levels whenever possible, while also being more robust to low correlations between levels and to noisy evaluations. Analytical problems are used to test and illustrate the efficiency of the method. It is finally applied to the optimization of a fully nonlinear fluid-structure interaction system to demonstrate its feasibility on real large-scale problems, with fidelity levels mixing physical approximations in the constitutive models and discretization refinements.
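The EGO loop described above (fit a statistical surrogate, pick the next point by an infilling criterion, evaluate, repeat) can be illustrated with a minimal single-fidelity sketch. This is not the paper's multi-fidelity method; it is a generic EGO example using a Gaussian-process surrogate from scikit-learn and the standard expected-improvement criterion, on a hypothetical 1-D objective standing in for a costly simulation:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical cheap stand-in for a costly objective (assumption, not
# the paper's test case); its minimum on [0, 2] is near x ~ 1.5.
def objective(x):
    return np.sin(3.0 * x) + 0.5 * x

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=(5, 1))  # initial training set
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

def expected_improvement(x_cand, gp, y_best):
    """Standard EI infilling criterion for minimization."""
    mu, sigma = gp.predict(x_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)  # guard against zero variance
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Sequential enrichment: surrogate fit -> infill point -> new evaluation.
for _ in range(15):
    gp.fit(X, y)
    cand = np.linspace(0.0, 2.0, 201).reshape(-1, 1)
    ei = expected_improvement(cand, gp, y.min())
    x_new = cand[np.argmax(ei)].reshape(1, 1)
    X = np.vstack([X, x_new])
    y = np.append(y, objective(x_new).ravel())

best = X[np.argmin(y)].item()
```

A multi-fidelity variant, as in the paper, would additionally choose the fidelity level of each new evaluation; the point of the proposed non-nested strategy is that this choice does not force evaluations at every lower level.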