Parametrized Benchmarking: an outline of the idea and a feasibility study.

2020 
Performance of real-parameter global optimization algorithms is typically evaluated on sets of test problems. We propose a new methodology for extending these benchmarks to obtain a more balanced experimental design: selectively removing some of the transformations originally used in the definitions of the test problems, such as rotation, scaling, or translation. In this way, we obtain several variants of each problem, parametrized by interpretable, high-level characteristics. These binary parameters are used as predictors in a multiple regression model explaining algorithmic performance. Linear models allow the strength and direction of performance changes to be attributed to particular characteristics of the optimization problems and thus provide insight into the underlying mechanics of the investigated algorithms. The proposed ideas are illustrated with an application example demonstrating the feasibility of the new benchmark. Parametrized benchmarking is a step towards multi-faceted insight into both algorithmic performance and the optimization problems themselves. The overall goal is to systematize the matching of problems to algorithms and thereby constructively address the limitations imposed by the no-free-lunch theorem.
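To make the idea concrete, the following is a minimal Python sketch of the workflow the abstract describes, not the paper's actual implementation. The base function (a sphere), the solver (plain random search under a fixed budget), and all parameter values are placeholder assumptions; only the overall scheme, toggling translation, rotation, and scaling to create problem variants and regressing log-error on the binary flags, follows the abstract.

```python
import itertools
import numpy as np
from numpy.linalg import qr

rng = np.random.default_rng(0)
DIM = 10  # assumed problem dimension

def sphere(x):
    # Placeholder base test problem.
    return float(np.sum(x ** 2))

def random_rotation(dim, rng):
    # Orthonormal matrix via QR decomposition of a Gaussian matrix.
    q, _ = qr(rng.standard_normal((dim, dim)))
    return q

def make_variant(translate, rotate, scale, rng):
    """Build one problem variant; the three binary flags are the
    interpretable, high-level characteristics."""
    shift = rng.uniform(-5, 5, DIM) if translate else np.zeros(DIM)
    rot = random_rotation(DIM, rng) if rotate else np.eye(DIM)
    lam = np.diag(10 ** np.linspace(0, 2, DIM)) if scale else np.eye(DIM)
    def f(x):
        return sphere(lam @ rot @ (x - shift))
    return f

def measure(f, budget=2000, rng=rng):
    # Stand-in performance measure: best error found by random search.
    xs = rng.uniform(-5, 5, (budget, DIM))
    return min(f(x) for x in xs)

# Evaluate every combination of the binary parameters.
rows, y = [], []
for flags in itertools.product([0, 1], repeat=3):
    f = make_variant(*map(bool, flags), rng=rng)
    rows.append((1,) + flags)                # intercept + binary predictors
    y.append(np.log10(measure(f) + 1e-12))   # log-error as the response

# Ordinary least squares: each coefficient attributes a performance
# change to translation, rotation, or scaling respectively.
X = np.array(rows, dtype=float)
beta, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
print(dict(zip(["intercept", "translate", "rotate", "scale"], beta)))
```

The sign and magnitude of each fitted coefficient then indicate whether, and how strongly, the corresponding transformation degrades or improves the (placeholder) algorithm's performance, which is the kind of attribution the linear model is meant to provide.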