Biological network-inspired interpretable variational autoencoder

2020 
Deep learning architectures such as variational autoencoders have revolutionized the analysis of transcriptomics data. However, the latent space of these variational autoencoders offers little to no interpretability. To provide further biological insights, we introduce a novel sparse variational autoencoder architecture, VEGA (VAE Enhanced by Gene Annotations), whose decoder wiring is inspired by a priori characterized biological abstractions, providing direct interpretability to the latent variables. We demonstrate the interpretability and flexibility of VEGA in diverse biological contexts by integrating various sources of biological abstractions, such as pathways, gene regulatory networks and cell type identities, into the latent space of our model. We show that our model can recapitulate the mechanisms of cell-type-specific response to treatments and the status of master regulators, as well as jointly investigate cell type and cellular state identity in developing cells. We envision that the approach could serve as an explanatory biological model in contexts such as development and drug treatment experiments.
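The core idea, a decoder whose connections are restricted to a priori defined gene sets so that each latent variable maps to a named biological abstraction, can be sketched as a masked linear layer. This is a simplified illustration under stated assumptions, not VEGA's actual implementation; the pathway names and gene memberships below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annotation: 3 "pathways" (latent variables) over 6 genes.
# mask[p, g] = 1 if gene g is annotated to pathway p (invented memberships).
mask = np.array([
    [1, 1, 0, 0, 0, 0],  # pathway A -> genes 0, 1
    [0, 0, 1, 1, 0, 0],  # pathway B -> genes 2, 3
    [0, 0, 0, 0, 1, 1],  # pathway C -> genes 4, 5
], dtype=float)

# Dense decoder weights, then masked: each latent variable can only
# influence the expression of its annotated genes (sparse decoder wiring).
W = rng.normal(size=mask.shape)
W_sparse = W * mask

# Latent "pathway activity" values for 4 cells, decoded linearly.
z = rng.normal(size=(4, 3))
x_hat = z @ W_sparse  # reconstructed expression, shape (4 cells, 6 genes)

# Genes outside pathway A receive zero contribution from its latent variable,
# which is what makes the latent dimension directly interpretable.
assert np.all(W_sparse[0, 2:] == 0.0)
```

In a full model the masked weights would sit in the decoder of a variational autoencoder and be trained end to end; the mask guarantees the sparsity pattern survives training, so a latent value can be read as the activity of its pathway.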