Learning Deep Autoencoders without Layer-wise Training.

2014 
Although greedy layer-wise pre-training has been widely successful for deep neural networks, the locality of the method means that higher layers of the network may not learn representations that are useful for reconstructing the original input. This work proposes a novel unsupervised joint training method that integrates multiple single-layer training objectives into one global objective. It not only mimics the layer-wise training scheme locally, but also adjusts all the weights together based on the end-to-end reconstruction loss. Results show that it extracts more representative features and achieves better classification results in the unsupervised setting, and reaches comparable performance in the supervised setting relative to the greedy layer-wise approach, while being faster.
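The combined objective described above can be sketched as a sum of per-layer (local) reconstruction terms plus one end-to-end (global) reconstruction term. The sketch below is a minimal illustration with NumPy, assuming tied weights, sigmoid activations, squared-error reconstruction, and a weighting factor `lam` for the local terms; the layer sizes and all names are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical 2-layer encoder: layer sizes are illustrative only.
sizes = [8, 6, 4]
W = [rng.normal(0.0, 0.1, (sizes[i], sizes[i + 1])) for i in range(2)]

def joint_loss(x, W, lam=1.0):
    """One global objective: end-to-end reconstruction loss plus
    a weighted sum of per-layer reconstruction terms (assumed form)."""
    h = x
    local = 0.0
    for Wi in W:
        h_next = sigmoid(h @ Wi)
        # Local layer-wise term: reconstruct this layer's input
        # through tied (transposed) weights, as a single-layer
        # autoencoder would.
        recon = sigmoid(h_next @ Wi.T)
        local += np.mean((recon - h) ** 2)
        h = h_next
    # Global term: decode the top code back through all tied weights.
    r = h
    for Wi in reversed(W):
        r = sigmoid(r @ Wi.T)
    global_term = np.mean((r - x) ** 2)
    return global_term + lam * local

x = rng.random((5, 8))       # a small batch of inputs
loss = joint_loss(x, W)
```

Because every term lives in one scalar objective, a gradient step on `joint_loss` adjusts all weights together, rather than freezing lower layers as greedy pre-training does.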