Critical initialization of wide and deep neural networks through partial Jacobians: general theory and applications to LayerNorm
2021
Deep neural networks are notorious for defying theoretical treatment. However, when the number of parameters in each layer tends to infinity, the network function is a Gaussian process (GP) and a quantitatively predictive description is possible. The Gaussian approximation allows one to formulate criteria for selecting hyperparameters, such as the variances of weights and biases, as well as the learning rate. These criteria rely on the notion of criticality defined for deep neural networks. In this work we describe a new way to diagnose (both theoretically and empirically) this criticality. To that end, we introduce partial Jacobians of a network, defined as derivatives of preactivations in layer $l$ with respect to preactivations in layer $l_0 \leq l$.
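To make the definition concrete, the following is a minimal sketch (not the authors' implementation) of how the averaged partial Jacobian norm $J^{(l_0,l)} = \frac{1}{N}\,\mathbb{E}\sum_{i,j}\big(\partial h_i^{(l)}/\partial h_j^{(l_0)}\big)^2$ could be estimated for a fully connected network at initialization using JAX autodiff. The width N, depth L, tanh nonlinearity, and the variances sigma_w2, sigma_b2 are illustrative assumptions, not values from the paper.

```python
import jax
import jax.numpy as jnp

N, L = 256, 10                 # width and depth (assumed for illustration)
sigma_w2, sigma_b2 = 1.5, 0.0  # weight/bias variances (assumed for illustration)

def init_params(key):
    """Draw weights W ~ N(0, sigma_w^2 / N) and biases b ~ N(0, sigma_b^2)."""
    keys = jax.random.split(key, L)
    return [(jax.random.normal(k, (N, N)) * jnp.sqrt(sigma_w2 / N),
             jax.random.normal(jax.random.fold_in(k, 1), (N,)) * jnp.sqrt(sigma_b2))
            for k in keys]

def propagate(h_l0, params_slice):
    """Map preactivations at layer l0 to preactivations at the final layer."""
    h = h_l0
    for W, b in params_slice:
        h = W @ jnp.tanh(h) + b
    return h

def partial_jacobian_norm(key, l0):
    """Single-sample estimate of the averaged partial Jacobian norm J^{(l0, L)}."""
    k_par, k_in = jax.random.split(key)
    params = init_params(k_par)
    h0 = jax.random.normal(k_in, (N,))            # input preactivations
    h_l0 = propagate(h0, params[:l0])             # preactivations at layer l0
    J = jax.jacfwd(propagate)(h_l0, params[l0:])  # (N, N) partial Jacobian d h^(L) / d h^(l0)
    return jnp.sum(J ** 2) / N

print(partial_jacobian_norm(jax.random.PRNGKey(0), l0=2))
```

Averaging this quantity over many random draws of the parameters approximates the expectation in the definition; its growth or decay with depth is what signals whether the chosen (sigma_w2, sigma_b2) lie at criticality.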