Evolutionary expectation maximization

2018 
We establish a link between evolutionary algorithms (EAs) and learning of probabilistic generative models with binary hidden variables. Learning is formulated as approximate maximum likelihood optimization using variational expectation maximization. When truncated posteriors are chosen as variational distributions, the variational parameters take the form of sets of latent states. By (A) identifying these sets with populations of genomes, and by (B) defining the fitness of individuals as the joint probability of the chosen generative model, the sets of latent states can be optimized using EAs. We obtain scalable learning algorithms that effectively improve the tractable free energy objective of truncated posteriors. While this approach is applicable to any model with binary latents and tractable joint probability, as a proof of concept we here apply it to the optimization of parameters of noisy-OR Bayes nets (modeling binary data) and Binary Sparse Coding (modeling continuous data). We find that the data likelihood is efficiently improved by employing genetic algorithms with point mutations and single-point crossover as EAs. More generally, we believe that, with the link established here, standard as well as recent results in evolutionary optimization can be leveraged to address the difficult problem of parameter optimization in generative models.
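
To make the construction concrete, the following is a minimal sketch of one evolutionary generation on a population of binary latent states, with fitness given by the log joint probability of a Binary Sparse Coding model. This is not the paper's implementation: the function names (bsc_log_joint, evolve_states), the hyperparameters, and the toy data are all illustrative assumptions; only the ingredients named in the abstract (binary genomes, point mutations, single-point crossover, joint probability as fitness) are taken from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

def bsc_log_joint(s, y, W, sigma2, pi):
    """Log joint log p(s, y) of Binary Sparse Coding (illustrative):
    p(s) = prod_h pi^{s_h} (1 - pi)^{1 - s_h},  p(y | s) = N(y; W s, sigma2 I)."""
    D = y.shape[0]
    log_prior = np.sum(s * np.log(pi) + (1 - s) * np.log(1 - pi))
    resid = y - W @ s
    log_lik = -0.5 * (D * np.log(2 * np.pi * sigma2) + resid @ resid / sigma2)
    return log_prior + log_lik

def evolve_states(K, y, W, sigma2, pi, n_children=4, p_mut=0.05):
    """One EA generation on a set K (rows = binary latent states) for one
    data point y: single-point crossover, point mutations, then survival
    of the fittest, with fitness given by the model's log joint."""
    N, H = K.shape
    children = []
    for _ in range(n_children):
        a, b = rng.choice(N, size=2, replace=False)   # pick two parents
        cut = rng.integers(1, H)                      # single-point crossover
        child = np.concatenate([K[a, :cut], K[b, cut:]])
        flips = rng.random(H) < p_mut                 # point mutations
        children.append(np.where(flips, 1 - child, child))
    pool = np.unique(np.vstack([K] + children), axis=0)
    fitness = np.array([bsc_log_joint(s, y, W, sigma2, pi) for s in pool])
    return pool[np.argsort(fitness)[-N:]]             # keep the N fittest

# Toy usage: H latent causes, D observed dimensions, population size N.
H, D, N = 8, 10, 6
W = rng.normal(size=(D, H))
y = W @ (rng.random(H) < 0.2) + 0.1 * rng.normal(size=D)
K = (rng.random((N, H)) < 0.5).astype(int)
for _ in range(20):
    K = evolve_states(K, y, W, sigma2=0.01, pi=0.2)
```

Using the log joint as fitness is equivalent to using the joint probability itself, since the logarithm is monotone and leaves the selection order unchanged. In the framework described by the abstract, the surviving set of latent states plays the role of the variational parameters of a truncated posterior, which would then feed the M-step updates of the model parameters.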