A Training Difficulty Schedule for Effective Search of Meta-Heuristic Design

2020 
In the context of optimization problems, the performance of an algorithm depends on the problem, and it is difficult to know a priori which algorithm (and which parameter settings) will perform best on a new problem. For this reason, we previously proposed a framework that uses grammatical evolution to automatically generate swarm intelligence algorithms for a given training problem. However, we observed two issues that affected the results of the framework. The first was that training problems are sometimes too easy, so that any candidate algorithm can solve them, or too difficult, so that no candidate algorithm can solve them. The second was the presence of numerical parameters in the grammar, which significantly enlarges the search space. In this work, we address these issues by investigating three training schedules in which the problems start easy and get harder over time, and by examining whether numerical parameters should be part of the grammar. We compared these schedules to the previous one, and compared the performance of the generated algorithms against three traditional algorithms: differential evolution (DE), particle swarm optimization (PSO), and cuckoo search (CS). We found that gradually increasing the difficulty of the training problem produced algorithms that could solve more testing instances than training only on 10-D problems; the results suggest that a step-by-step increase in difficulty is the better approach overall. We also found that including parameters in the grammar yielded algorithms on par with the traditional meta-heuristics. As expected, removing parameters from the grammar produced the worst overall performance; interestingly, however, the resulting algorithms could still solve most of the testing instances within the given testing budget.
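To illustrate the idea of a training schedule in which problems start easy and get harder, here is a minimal, self-contained Python sketch. It assumes problem dimensionality is the difficulty knob and uses plain random search on the sphere function as a stand-in for a grammar-generated candidate; all names are illustrative, not the paper's framework API.

```python
import random

def sphere(x):
    """Toy benchmark: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def random_search(dim, budget):
    """Stand-in for a generated meta-heuristic: best-of-budget sampling."""
    best = float("inf")
    for _ in range(budget):
        x = [random.uniform(-5, 5) for _ in range(dim)]
        best = min(best, sphere(x))
    return best

def stepwise_fitness(dimensions=(2, 5, 10), budget=5000, target=1e-2):
    """Score a candidate by how many difficulty steps it clears in order.

    Training stops at the first failed step, so easy instances gate
    access to harder ones -- the "step-by-step" increase in difficulty
    described in the abstract.
    """
    solved = 0
    for dim in dimensions:  # easy -> hard
        if random_search(dim, budget) <= target:
            solved += 1
        else:
            break
    return solved / len(dimensions)

if __name__ == "__main__":
    print(f"fraction of difficulty steps solved: {stepwise_fitness():.2f}")
```

The gating design means a candidate that cannot solve easy instances never spends budget on hard ones, which is one plausible way a schedule avoids both failure modes the abstract mentions (problems too easy to discriminate candidates, or too hard for any candidate to solve).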