Full Encoder: Make Autoencoders Learn Like PCA

2021 
While the beta-VAE family aims to find disentangled representations and recover human-interpretable generative factors, much as ICA does in the linear domain, we propose the Full Encoder: a novel unified autoencoder framework that corresponds to PCA in the non-linear domain. The idea is to first train an autoencoder with a single latent variable, then progressively add latent variables to refine the reconstruction. The latent variables acquired with the Full Encoder are stable and robust: they learn the same representation regardless of the network's initial state. The Full Encoder can be used to determine the degrees of freedom of a non-linear system, and is useful for data compression and anomaly detection. It can also be combined with the beta-VAE framework to rank the importance of the generative factors, providing further insight for non-linear system analysis. We created a toy dataset based on a non-linear system to test the Full Encoder and compare its results with those of VAE and beta-VAE.
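
A minimal sketch of the progressive training scheme described above, assuming a simple fully connected autoencoder in PyTorch. The ProgressiveAE class, the latent-masking trick, and all hyperparameters are illustrative assumptions, not the authors' reference implementation.

import torch
import torch.nn as nn

class ProgressiveAE(nn.Module):
    def __init__(self, input_dim, max_latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 64), nn.ReLU(),
            nn.Linear(64, max_latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(max_latent_dim, 64), nn.ReLU(),
            nn.Linear(64, input_dim),
        )

    def forward(self, x, active_dims):
        z = self.encoder(x)
        # Zero out latent dimensions that are not yet in use, so the
        # reconstruction relies only on the first `active_dims` variables.
        mask = torch.zeros_like(z)
        mask[:, :active_dims] = 1.0
        return self.decoder(z * mask)

def train_progressively(model, data, max_latent_dim, epochs_per_stage=100):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    # Stage k refines the reconstruction using latent dims 1..k,
    # mirroring the "train with one latent variable first, then add
    # more progressively" idea from the abstract.
    for k in range(1, max_latent_dim + 1):
        for _ in range(epochs_per_stage):
            opt.zero_grad()
            loss = loss_fn(model(data, active_dims=k), data)
            loss.backward()
            opt.step()

# Toy usage on random data (stand-in for the paper's toy non-linear system):
x = torch.randn(512, 10)
model = ProgressiveAE(input_dim=10, max_latent_dim=3)
train_progressively(model, x, max_latent_dim=3)

Because each stage must explain as much of the reconstruction error as possible before the next latent variable is introduced, the learned variables acquire a PCA-like ordering by importance, which is what makes the representation stable across initializations.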