Sparse Coding and Autoencoders
2018
In this work we study the landscape of the squared loss of an autoencoder when the data generative model is that of "Sparse Coding"/"Dictionary Learning". The neural net considered is an $\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ mapping and has a single ReLU activation layer of size $h > n$. The net has access to vectors $y\in \mathbb{R}^{n}$ obtained as $y=A^{\ast}x^{\ast}$, where $x^{\ast}\in \mathbb{R}^{h}$ are sparse high-dimensional vectors and $A^{\ast}\in \mathbb{R}^{n\times h}$ is an overcomplete incoherent matrix. Under very mild distributional assumptions on $x^{\ast}$, we prove that the norm of the expected gradient of the squared loss function is asymptotically (in the sparse-code dimension) negligible for all points in a small neighborhood of $A^{\ast}$. This is supported with experimental evidence using synthetic data. We conduct experiments suggesting that $A^{\ast}$ sits at the bottom of a well in the landscape, and we also give experiments showing that gradient descent on this loss function gets columnwise very close to the original dictionary even when initialized far from it. Along the way we prove that a layer of ReLU gates can be set up to automatically recover the support of the sparse codes. Since this property holds independently of the loss function, we believe it could be of independent interest. A full version of this paper is accessible at: https://arxiv.org/abs/1708.03735
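The setup above can be made concrete with a small synthetic experiment. The following Python sketch is illustrative only: it assumes a weight-tied parameterization $\hat{y}=W^{\top}\mathrm{ReLU}(Wy+b)$ (one common way to instantiate a single-ReLU-layer autoencoder; the paper's exact parameterization may differ), uses a random unit-norm Gaussian dictionary as a stand-in for the incoherent $A^{\ast}$, and makes hypothetical choices for the dimensions, sparsity, and code distribution. It evaluates the squared reconstruction loss and its gradient at $W = A^{\ast\top}$ and at a random point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper): input dim n,
# overcomplete code dim h > n, sparsity k, number of samples m.
n, h, k, m = 50, 200, 5, 2000

# Hypothetical incoherent overcomplete dictionary A*: random Gaussian
# columns normalized to unit norm (such matrices are incoherent w.h.p.).
A_star = rng.standard_normal((n, h))
A_star /= np.linalg.norm(A_star, axis=0, keepdims=True)

# k-sparse nonnegative codes x* on uniformly random supports.
X = np.zeros((h, m))
for j in range(m):
    supp = rng.choice(h, size=k, replace=False)
    X[supp, j] = rng.uniform(1.0, 2.0, size=k)

Y = A_star @ X  # observed samples y = A* x*

def loss_and_grad(W, b, Y):
    """Squared reconstruction loss and its gradient in W for a weight-tied
    ReLU autoencoder y -> W^T ReLU(W y + b); the negative bias b acts as a
    threshold on the hidden layer of size h."""
    Z = W @ Y + b[:, None]             # pre-activations, shape (h, m)
    H = np.maximum(Z, 0.0)             # ReLU layer
    E = W.T @ H - Y                    # reconstruction error, shape (n, m)
    loss = 0.5 * np.mean(np.sum(E ** 2, axis=0))
    # Gradient of the averaged squared loss with respect to W.
    grad = (H @ E.T + ((Z > 0) * (W @ E)) @ Y.T) / Y.shape[1]
    return loss, grad

b = -0.5 * np.ones(h)                  # hypothetical threshold value
for name, W in [("W = A*^T", A_star.T),
                ("random W", rng.standard_normal((h, n)) / np.sqrt(n))]:
    loss, grad = loss_and_grad(W, b, Y)
    print(f"{name:10s}  loss = {loss:8.4f}   ||grad||_F = {np.linalg.norm(grad):8.4f}")
```

How sharply the gradient norm at $W=A^{\ast\top}$ separates from that at the random point depends on the dimensions, sparsity, and threshold chosen here; the paper's guarantee is asymptotic in the sparse-code dimension and under its specific distributional assumptions.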