
 
 
Hierarchical Federated Learning with Gaussian Differential Privacy
ZHOU Tao, PENG Hai-Peng*
School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing 100089
*Corresponding author
Funding: none
Published online: 7 March 2022
Accepted by: none
Citation: ZHOU Tao, PENG Hai-Peng. Hierarchical Federated Learning with Gaussian Differential Privacy [OL]. [7 March 2022]. http://en.paper.edu.cn/en_releasepaper/content/4756405
 
 
Federated learning is a privacy-preserving machine learning technology: each participant can help build a model without disclosing its underlying data, sharing only the model's weight updates and gradient information with the server. However, a large body of work shows that attackers can easily recover a client's contributions and the associated private training data from the publicly shared gradients, so gradient exchange is no longer safe. To secure federated learning, differential privacy methods add noise to model updates to obscure each client's contribution, thereby resisting membership inference attacks, preventing malicious clients from learning information about other clients, and guaranteeing private outputs. This paper proposes a new differentially private aggregation scheme that adopts a more fine-grained hierarchical update strategy. For the first time, the $f$-differential privacy ($f$-DP) method is used for the privacy analysis of federated aggregation, and Gaussian noise is added to perturb model updates in order to provide client-level privacy protection. Our experiments show that the $f$-DP analysis improves on previous privacy analyses: it accurately captures the privacy loss at every communication round of federated training and overcomes the problem, common in prior work, of ensuring privacy at the cost of reduced model utility. At the same time, it yields a federated model update scheme with wider applicability and better utility. When enough users participate in federated learning, client-level privacy is guaranteed while model loss is minimized.
Keywords: artificial intelligence; federated learning; differential privacy
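
The client-level Gaussian mechanism described in the abstract can be illustrated with a short sketch. In the Gaussian DP formulation of $f$-DP, a mechanism is $\mu$-GDP when its trade-off function is bounded below by $G_\mu(\alpha)=\Phi(\Phi^{-1}(1-\alpha)-\mu)$, and in practice the per-round mechanism amounts to clipping each client's update and adding Gaussian noise to the server-side aggregate. The following Python sketch is not the paper's implementation; the function name, parameters, and noise calibration are illustrative assumptions under a simple clip-and-average scheme.

```python
# Minimal sketch (assumed, not the authors' code) of one round of client-level
# differentially private federated averaging with the Gaussian mechanism.
import numpy as np

def dp_fed_avg_round(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip each client's update, average, and add Gaussian noise.

    client_updates   : list of 1-D numpy arrays (flattened model deltas)
    clip_norm        : L2 clipping bound applied to each client's update
    noise_multiplier : scales the Gaussian noise relative to the clipping bound
    """
    rng = rng or np.random.default_rng()
    n = len(client_updates)

    clipped = []
    for delta in client_updates:
        norm = np.linalg.norm(delta)
        # Scale down any update whose L2 norm exceeds the clipping bound.
        clipped.append(delta * min(1.0, clip_norm / (norm + 1e-12)))

    # Average the clipped updates, then perturb with Gaussian noise whose
    # standard deviation is calibrated to the clipping bound and client count.
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / n
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Example: 100 simulated clients with 10-dimensional updates.
updates = [np.random.randn(10) for _ in range(100)]
noisy_mean = dp_fed_avg_round(updates, clip_norm=1.0, noise_multiplier=1.0)
```

In this kind of scheme, the per-round guarantee is governed by the ratio of the noise scale to the clipping bound, and the $f$-DP framework composes these per-round guarantees across communication rounds.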
 
 
 
