Abstract Federated learning has become a widely used distributed learning approach in recent years; however, even though model training shifts from collecting raw data to gathering parameters, privacy violations may still occur when models are published and shared. A dynamic approach is proposed to add Gaussian noise more effectively and apply differential privacy to federated deep learning. Concretely, it abandons the traditional way of equally distributing the privacy budget and instead adjusts the budget dynamically to accommodate gradient-descent federated learning, where the parameters are derived by computation to avoid the impact of manually set hyperparameters on the algorithm. It also incorporates adaptive threshold clipping to control the sensitivity, and finally, the moments accountant is used to track the privacy budget consumed; learning stops only when the limit set by the clients is reached, which allows the privacy budget to be fully exploited for model training. Experimental results on real datasets show that the method trains models almost as effective as non-private learning, significantly outperforming the differential privacy method used by TensorFlow.
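This abstract describes per-round Gaussian noising with adaptive threshold clipping but includes no code. The following Python/NumPy sketch illustrates one such aggregation round under stated assumptions: the adaptive threshold is approximated as a quantile of the clients' gradient norms, the dynamic budget schedule and the moments accountant are omitted, and all identifiers (dp_fed_round, clip_quantile, noise_multiplier) are illustrative rather than taken from the paper.

```python
import numpy as np

def dp_fed_round(client_grads, clip_quantile=0.5, noise_multiplier=1.1, rng=None):
    # One aggregation round: clip each client update to an adaptive
    # threshold C, sum, add Gaussian noise calibrated to C, and average.
    # Assumption: C is a quantile of client gradient norms, standing in
    # for the paper's "adaptive threshold cropping".
    rng = rng if rng is not None else np.random.default_rng(0)
    norms = np.array([np.linalg.norm(g) for g in client_grads])
    C = np.quantile(norms, clip_quantile)            # adaptive clipping threshold
    clipped = [g * min(1.0, C / (n + 1e-12)) for g, n in zip(client_grads, norms)]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, noise_multiplier * C,
                                                     size=clipped[0].shape)
    return noisy_sum / len(client_grads)             # noisy average model update
```

In the dynamic scheme the abstract describes, noise_multiplier would be recomputed each round from the remaining budget rather than held fixed as it is in this sketch.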
Abstract The incentive mechanism of federated learning has been a hot topic, but little research has addressed compensation for privacy loss. To this end, this study uses the Local SGD federated learning framework and gives a theoretical analysis under differential privacy protection. Based on this analysis, a multi-attribute reverse auction model is proposed for selecting users and computing payments for participation in federated learning. The model mixes economic and non-economic attributes when selecting users, and the selection problem is transformed into an optimisation equation. In addition, a post-auction negotiation model is proposed that uses the Rubinstein bargaining model together with optimisation equations to describe the negotiation process, and the resulting improvement in social welfare is demonstrated theoretically. In the experimental part, the authors find that their algorithm improves both model accuracy and F1-score relative to the comparison algorithms to varying degrees.
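The auction mechanism is described only at a high level. As a hedged illustration, the Python sketch below selects winners by a simple score-per-price greedy rule under a payment budget; the paper instead solves an exact optimisation program, and the Rubinstein post-auction bargaining step is not shown. All identifiers and the attribute weights are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    user: str
    price: float        # economic attribute: requested payment
    quality: float      # non-economic attribute in [0, 1], e.g. data quality
    reliability: float  # non-economic attribute in [0, 1]

def select_users(bids, budget, w_quality=0.6, w_reliability=0.4):
    # Greedy stand-in for multi-attribute reverse-auction winner selection:
    # rank bids by weighted non-economic score per unit price, then admit
    # bids while the total payment stays within budget.
    scored = sorted(bids,
                    key=lambda b: (w_quality * b.quality +
                                   w_reliability * b.reliability) / b.price,
                    reverse=True)
    winners, spent = [], 0.0
    for b in scored:
        if spent + b.price <= budget:
            winners.append(b.user)
            spent += b.price
    return winners, spent
```

A greedy rule is only an approximation of the optimisation equation the abstract mentions; a faithful implementation would also adjust payments through the bargaining model.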
Spurred by the simultaneous need for data privacy protection and data sharing, federated learning (FL) has been proposed. However, it still poses a risk of privacy leakage. This paper proposes an improved Differential Privacy (DP) algorithm to protect the federated learning model. Additionally, the Fast Fourier Transform (FFT) is used in the computation of the privacy budget $$\epsilon_{total}$$ to minimize the impact of limited computational resources and numerous users on the effectiveness of the trained model. Moreover, instead of analyzing the privacy budget $$\epsilon$$ directly through various methods, the Privacy Loss Distribution (PLD) and privacy curves are adopted, the number of manually assigned hyperparameters is reduced, and the grid parameters used for the FFT discretization are improved. The improved algorithm tightens the parameter bounds and minimizes the influence of human factors with minimal impact on efficiency. It decreases the errors caused by truncating and discretizing PLDs while widening the discretization interval to reduce the computational workload. Furthermore, an improved activation function, a tempered sigmoid with a single parameter $$\tau$$, smooths the accuracy curve and mitigates drastic fluctuations during model training. Finally, simulation results on real datasets show that the improved DP algorithm, which accounts for the long-tailed case, achieves a better balance between privacy and utility in federated learning models.
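For the FFT-based accounting, the following is a minimal Python sketch in the spirit of FFT composition of discretized privacy loss distributions (e.g., Koskela et al.): composing k training rounds becomes a pointwise k-th power in frequency space, and $$\delta(\epsilon)$$ is read off the composed distribution. The truncation and discretization error corrections the abstract emphasizes are omitted, and the tempered-sigmoid shape constants s and o fixed below are assumptions (only $$\tau$$ is varied in the paper).

```python
import numpy as np

def compose_pld_fft(pld, grid, k):
    # k-fold composition of a discretized privacy loss distribution (PLD):
    # pld holds probabilities on an equally spaced loss grid; convolution
    # over k rounds is computed as an FFT power, zero-padded so the
    # circular convolution matches the linear one.
    n = len(pld)
    size = 1
    while size < k * (n - 1) + 1:
        size *= 2
    composed = np.fft.irfft(np.fft.rfft(pld, size) ** k, size)
    dx = grid[1] - grid[0]
    new_grid = k * grid[0] + dx * np.arange(size)  # grid shifts and extends under composition
    return np.clip(composed, 0.0, None), new_grid

def delta_of_eps(pld, grid, eps):
    # delta(eps) = sum over losses L > eps of pld(L) * (1 - e^{eps - L}),
    # i.e. the expected hockey-stick divergence at threshold eps.
    tail = grid > eps
    return float(np.sum(pld[tail] * (1.0 - np.exp(eps - grid[tail]))))

def tempered_sigmoid(x, tau=1.0, s=2.0, o=1.0):
    # s / (1 + e^{-tau * x}) - o; with the assumed s=2, o=1 this
    # equals tanh(tau * x / 2), a bounded activation that tempers
    # gradient magnitudes under DP training.
    return s / (1.0 + np.exp(-tau * x)) - o
```

Bounding the truncation of the grid and widening its spacing, as the abstract describes, trades accounting error against the FFT's computational cost; this sketch leaves both at their naive defaults.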