Fast differentiable DNA and protein sequence optimization for molecular design.

2020 
Designing DNA and protein sequences with improved or novel function has the potential to greatly accelerate synthetic biology. Machine learning models that accurately predict biological fitness from sequence are becoming a powerful tool for molecular design. Activation maximization offers a simple design strategy for differentiable models: one-hot coded sequences are first approximated by a continuous representation, which is then iteratively optimized with respect to the predictor oracle by gradient ascent. While elegant, this method is limited by technical challenges, as it suffers from vanishing gradients and may cause predictor pathologies leading to poor convergence. Here, we build on a previously proposed straight-through approximation method to optimize through discrete sequence samples. By normalizing nucleotide logits across positions and introducing an adaptive entropy variable, we remove bottlenecks arising from overly large or skewed sampling parameters. This results in a markedly improved algorithm with up to 100-fold faster convergence. Moreover, our method finds improved fitness optima compared to existing methods, including the original algorithm without normalization and global optimization heuristics such as Simulated Annealing. We demonstrate our improved method by designing DNA and enzyme sequences for six deep learning predictors, including a protein structure predictor (trRosetta).
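To make the core idea concrete, the following is a minimal PyTorch sketch of straight-through sampling with position-wise logit normalization, as described in the abstract. The `predictor` function is a hypothetical stand-in for any differentiable fitness oracle, and the scalar `gamma`/`beta` parameters are an illustrative simplification of the adaptive entropy variable; this is not the authors' implementation.

```python
import torch

def straight_through_sample(logits):
    """Sample one-hot sequences; pass gradients through the softmax."""
    probs = torch.softmax(logits, dim=-1)              # (L, 4) position-wise PWM
    idx = torch.multinomial(probs, num_samples=1).squeeze(-1)
    one_hot = torch.nn.functional.one_hot(idx, probs.shape[-1]).float()
    # Straight-through estimator: forward pass uses the discrete sample,
    # backward pass uses the continuous probabilities.
    return one_hot + probs - probs.detach()

def normalize_logits(raw_logits, gamma, beta, eps=1e-6):
    """Normalize logits across sequence positions; gamma acts as an
    adaptive entropy (temperature-like) parameter (illustrative)."""
    mu = raw_logits.mean(dim=0, keepdim=True)
    sigma = raw_logits.std(dim=0, keepdim=True)
    return gamma * (raw_logits - mu) / (sigma + eps) + beta

L, A = 100, 4                                          # sequence length, alphabet size
raw_logits = torch.randn(L, A, requires_grad=True)
gamma = torch.ones(1, requires_grad=True)              # adaptive entropy variable
beta = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([raw_logits, gamma, beta], lr=0.1)

for step in range(1000):
    opt.zero_grad()
    x = straight_through_sample(normalize_logits(raw_logits, gamma, beta))
    fitness = predictor(x)                             # scalar output of a hypothetical
    (-fitness).backward()                              # differentiable oracle; ascend it
    opt.step()
```

The normalization step keeps the logits at a bounded scale regardless of how many gradient updates have been applied, which is what removes the vanishing-gradient and skewed-sampling bottlenecks the abstract describes: the sampled sequences stay stochastic enough to explore while the learned scale controls how quickly the distribution sharpens.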