The Description Length of Deep Learning models

2018 
Deep learning models often have more parameters than observations, yet still perform well. This is sometimes described as a paradox. In this work, we show experimentally that despite their huge number of parameters, deep neural networks can compress the data losslessly \emph{even when taking the cost of encoding the parameters into account}. Such a compression viewpoint originally motivated the use of \emph{variational methods} in neural networks \cite{Hinton,Schmidhuber1997}. However, we show that these variational methods provide surprisingly poor compression bounds, despite being explicitly built to minimize such bounds. This might explain the relatively poor practical performance of variational methods in deep learning. Better encoding methods, imported from the Minimum Description Length (MDL) toolbox, yield much better compression values on deep networks.
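For concreteness, the variational compression bound in question can be sketched as follows (our notation, following the bits-back idea of \cite{Hinton}; the paper's exact symbols may differ). With a prior $\alpha(\theta)$ over network parameters and an approximate posterior $\beta(\theta)$, the data $D$ admits a code of length
\[
L_{\mathrm{var}}(D) \;=\; \mathbb{E}_{\theta \sim \beta}\!\left[-\log p(D \mid \theta)\right] \;+\; \mathrm{KL}\!\left(\beta \,\|\, \alpha\right),
\]
an upper bound on the ideal Bayesian code length $-\log \int p(D \mid \theta)\,\alpha(\mathrm{d}\theta)$. Variational methods are trained to minimize exactly this quantity, which is what makes their poor compression values surprising.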