RevFRF: Enabling Cross-domain Random Forest Training with Revocable Federated Learning

2021 
Random forest (RF) is one of the most widely used machine learning tools across a broad range of industrial scenarios. Recently, federated learning has enabled efficient distributed machine learning without directly revealing private participant data. In this paper, we present RevFRF, a novel federated random forest framework, and use it to examine the participant revocation problem in federated learning. Specifically, RevFRF first introduces a suite of homomorphic encryption based secure protocols to implement a federated RF. The protocols cover the whole lifecycle of an RF model, including construction, prediction, and participant revocation. Then, turning to the practical application scenarios of RevFRF, we observe that existing federated learning frameworks ignore the fact that not every participant can maintain its cooperation with the others forever. In company-level cooperation, allowing the remaining companies to keep using a trained model that contains the memories (i.e., data contributions) of a company that has left the cooperation can lead to a significant conflict of interest. We therefore propose the concept of revocable federated learning and illustrate how RevFRF implements participant revocation in practical applications.
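As a rough illustration of how additively homomorphic encryption can support federated RF construction, the sketch below uses the Paillier cryptosystem (via the Python `phe` library) to let participants submit encrypted class histograms for a candidate split: the aggregator sums the ciphertexts without seeing any individual contribution, and only the key holder decrypts the merged counts to evaluate Gini impurity. This is a minimal sketch with assumed participant roles and toy data, not the actual RevFRF protocol.

```python
# Minimal sketch (not the RevFRF protocol): participants encrypt per-class label
# counts for a candidate split with Paillier; the aggregator adds ciphertexts,
# and only the key holder decrypts the merged histogram to score the split.
from phe import paillier  # pip install phe

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical local class histograms at one candidate split (left branch),
# given as [count_class_0, count_class_1] per participant.
local_histograms = [
    [30, 5],   # participant A
    [12, 20],  # participant B
    [8, 25],   # participant C
]

# Each participant encrypts its own counts before sharing them.
encrypted_histograms = [
    [public_key.encrypt(c) for c in hist] for hist in local_histograms
]

# The aggregator sums ciphertexts class-by-class; Paillier addition keeps the
# individual contributions hidden from the aggregator.
merged = encrypted_histograms[0]
for hist in encrypted_histograms[1:]:
    merged = [acc + enc for acc, enc in zip(merged, hist)]

# Only the private-key holder can recover the global histogram.
counts = [private_key.decrypt(c) for c in merged]
total = sum(counts)
gini = 1.0 - sum((c / total) ** 2 for c in counts)
print("global class counts:", counts, "gini impurity:", round(gini, 4))
```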