Minimal conditions analysis of gradient-based reconstruction in Federated Learning.

2021 
The input data of a neural network may be reconstructed from knowledge of that network's gradients, as demonstrated by \cite{zhu2019deep}. By imposing prior information and utilising a uniform initialization, we demonstrate faster and more accurate image reconstruction. Exploring the theoretical limits of reconstruction, we show that a single input may be reconstructed, regardless of network depth, using a fully-connected neural network with one hidden node. We then generalize this result to a gradient averaged over a mini-batch of size $B$. In this case, the full mini-batch can be reconstructed if the number of hidden units exceeds $B$, with an orthogonality regularizer improving the precision. For a convolutional neural network, the required number of filters in the first convolutional layer is determined by multiple factors (e.g., padding, kernel size, and stride). We therefore require the number of filters to satisfy $h \geq (d/d')^2 C$, where $d$ is the input width, $d'$ is the output width after the convolution, and $C$ is the number of input channels; equivalently, $h d'^2 \geq d^2 C$, so the first layer outputs at least as many values as the input has entries. Finally, we validate our theoretical analysis and improvements using biomedical (fMRI) and benchmark data (MNIST, Kuzushiji-MNIST, CIFAR100, ImageNet, and face images).
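To make the single-hidden-node result concrete, below is a minimal sketch (ours, not the paper's implementation; the network size, loss, and data are illustrative assumptions) of the analytic recovery that underlies this family of results: for a first fully-connected layer with pre-activation $z = Wx + b$, the gradients satisfy $\partial L/\partial W = (\partial L/\partial z)\,x^{\top}$ and $\partial L/\partial b = \partial L/\partial z$, so the input follows from any hidden unit with a non-zero bias gradient.

```python
# A minimal sketch (PyTorch; not the authors' code): analytic recovery of a
# single input from the gradients of a fully-connected layer with one hidden
# unit. For z = Wx + b, autograd gives dL/dW = (dL/dz) x^T and dL/db = dL/dz,
# so x = (dL/dW_i) / (dL/db_i) for any unit i with a non-zero bias gradient.
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 8                                  # input dimension (illustrative)
x = torch.randn(d)                     # the "private" input to recover
target = torch.tensor([1.0])           # illustrative regression target

# One hidden node, as in the single-hidden-node setting of the abstract;
# the sigmoid stands in for any differentiable activation.
model = nn.Sequential(nn.Linear(d, 1), nn.Sigmoid())
loss = nn.functional.mse_loss(model(x), target)
grad_W, grad_b = torch.autograd.grad(loss, list(model[0].parameters()))

# Recover x from the (single) row of the shared weight gradient.
x_rec = grad_W[0] / grad_b[0]
assert torch.allclose(x_rec, x, atol=1e-5)  # exact up to float precision
```

Intuitively, a mini-batch gradient averages $B$ such rank-one terms, which is why separating the individual examples calls for more than $B$ hidden units and, per the abstract, an orthogonality regularizer to improve precision.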