Autoencoder node saliency: Selecting relevant latent representations

2019 
Abstract The autoencoder is an artificial neural network that performs nonlinear dimension reduction and learns hidden representations of unlabeled data. With a linear transfer function it is similar to principal component analysis (PCA). While both methods use weight vectors for linear transformations, the autoencoder provides no indicator analogous to the eigenvalues that PCA pairs with its eigenvectors. We propose a novel autoencoder node saliency method that examines whether the features constructed by autoencoders exhibit properties related to known class labels. The supervised node saliency ranks the nodes by their capability of performing a learning task, and is coupled with the normalized entropy difference (NED). We establish a property of NED values that verifies classifying behavior among the top-ranked nodes. By applying our methods to real datasets, we demonstrate their ability to identify the best-performing nodes and to explain the tasks learned by autoencoders.
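The paper gives the exact definitions of supervised node saliency and NED; as a rough illustration of the ranking idea only, the sketch below scores each latent node by an entropy measure computed over binned activations and sorts nodes by that score. The function names, the binning choice, and the particular normalization are assumptions made for illustration and are not the paper's formulas; the supervised variant would additionally compare activation distributions across class labels, which is not shown here.

```python
import numpy as np

def normalized_entropy_difference(activations, n_bins=10):
    """Illustrative NED-style score for one latent node.

    Bins the node's activation values, computes the entropy of the
    histogram, and returns 1 - H / H_max, where H_max is the entropy of
    a uniform distribution over the occupied bins.  Scores near 1 mean
    activations concentrate in few bins; scores near 0 mean a nearly
    uniform spread.  (Assumed normalization, for illustration only.)
    """
    counts, _ = np.histogram(activations, bins=n_bins)
    probs = counts[counts > 0] / counts.sum()
    if len(probs) <= 1:  # all activations fall into a single bin
        return 1.0
    entropy = -np.sum(probs * np.log2(probs))
    max_entropy = np.log2(len(probs))
    return 1.0 - entropy / max_entropy

def rank_nodes_by_score(latent, n_bins=10):
    """Rank latent nodes (columns of `latent`) by descending score."""
    scores = np.array([normalized_entropy_difference(latent[:, j], n_bins)
                       for j in range(latent.shape[1])])
    order = np.argsort(scores)[::-1]
    return order, scores[order]

# Example: 500 samples encoded into 32 latent nodes (random stand-in data).
latent = np.random.rand(500, 32)
order, scores = rank_nodes_by_score(latent)
print("Top nodes:", order[:5], "scores:", np.round(scores[:5], 3))
```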