Biasing the transition of Bayesian optimization algorithm between Markov chain states in dynamic environments

2016 
When memory-based evolutionary algorithms are applied in dynamic environments, treating uncertain prior knowledge about future environments as if it were certain may mislead the evolutionary algorithm. To address this problem, this paper presents a new memory-based approach for applying the Bayesian optimization algorithm (BOA) in dynamic environments. Unlike existing memory-based methods, the proposed method uses knowledge of former environments probabilistically in future environments. For this purpose, a run of BOA is modeled as movement through a Markov chain whose states are the Bayesian networks learned in each generation. When the environment changes, a stationary distribution of the Markov chain is defined on the basis of the retrieved prior knowledge, and the transition probabilities of BOA in the Markov chain are then modified (biased) to comply with this stationary distribution. To this end, we employ the Metropolis algorithm and modify the K2 algorithm used for learning the Bayesian network in BOA so that it reflects the obtained transition probabilities. Experimental results show that the proposed method outperforms conventional methods, especially in random environments.
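The core biasing mechanism the abstract describes rests on the Metropolis acceptance rule: a proposed move in the chain is accepted with probability min(1, π(candidate)/π(current)), which makes the chain converge to the desired stationary distribution π. Below is a minimal, self-contained sketch of that rule; the function and parameter names are illustrative assumptions, and the paper's actual use (biasing K2's network-learning moves inside BOA) is not reproduced here.

```python
import random

def metropolis_step(state, propose, weight, rng):
    """One Metropolis step with a symmetric proposal: accept the
    candidate with probability min(1, weight(candidate) / weight(state)),
    so the chain's stationary distribution is proportional to `weight`.
    Names here are illustrative, not taken from the paper."""
    candidate = propose(state, rng)
    ratio = weight(candidate) / weight(state)
    if ratio >= 1.0 or rng.random() < ratio:
        return candidate
    return state

if __name__ == "__main__":
    rng = random.Random(0)
    # Toy target (unnormalized) over four abstract states; in the paper
    # the states would be Bayesian networks and `weight` would come from
    # the retrieved prior knowledge.
    weight = lambda s: [1.0, 2.0, 3.0, 4.0][s]
    propose = lambda s, r: r.randrange(4)  # symmetric uniform proposal
    counts = [0, 0, 0, 0]
    state = 0
    for _ in range(20000):
        state = metropolis_step(state, propose, weight, rng)
        counts[state] += 1
    # Visit frequencies should be roughly proportional to 1:2:3:4.
    print(counts)
```

Because the proposal is symmetric, the simple weight ratio suffices; an asymmetric proposal would require the Metropolis-Hastings correction term.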