Comparison of Privacy-Preserving Distributed Deep Learning Methods in Healthcare.

2021 
Data privacy regulations pose an obstacle for healthcare centres and hospitals seeking to share medical data with other organizations, which in turn impedes the building of deep learning models in the healthcare domain. Distributed deep learning methods enable models to be trained without sharing raw data from these centres, preserving the privacy of the data held at each centre. In this paper, we compare three privacy-preserving distributed learning techniques: federated learning, split learning, and SplitFed. We use these techniques to develop binary classification models for detecting tuberculosis from chest X-rays and compare them in terms of classification performance, communication and computational costs, and training time. We propose a novel distributed learning architecture called SplitFedv3, which outperforms split learning and SplitFedv2 in our experiments. We also propose alternate mini-batch training, a new training technique for split learning that performs better than alternate client training, where clients take turns training a model.
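To make the federated learning setting concrete, the following is a minimal sketch of the federated averaging (FedAvg) aggregation step, in which a server combines client models weighted by local dataset size. The client count, parameter shapes, and dataset sizes are illustrative assumptions, not the paper's experimental configuration.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Average client parameter lists, weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Three hypothetical hospitals, each holding one weight matrix and one bias
# vector trained locally (shapes and sizes are illustrative).
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
sizes = [100, 50, 50]  # local chest X-ray counts per centre (illustrative)

global_model = fed_avg(clients, sizes)
print(global_model[0].shape, global_model[1].shape)  # -> (4, 2) (2,)
```

In split learning and SplitFed, by contrast, the network itself is partitioned between client and server, so only activations and gradients at the cut layer cross the network rather than full model parameters.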