
 
 
The Analysis of Adversarial Robustness in Federated Learning Based on Differential Privacy
Li Qi, Li Li*
School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing 100876
*Corresponding author
Funding: ***Foundation (No.00000000), *** Foundation (No.00000000)
Opened online: 1 April 2024
Citation: Li Qi, Li Li. The Analysis of Adversarial Robustness in Federated Learning Based on Differential Privacy [OL]. [1 April 2024] http://en.paper.edu.cn/en_releasepaper/content/4763096
 
 
Federated learning (FL) allows multiple participants to train models collaboratively by keeping their data local while exchanging model updates; however, this also increases the risk of attack. Models trained in FL are as vulnerable to adversarial examples as centrally trained models, and the frequent exchange of updates raises the risk of data breaches. To enhance robustness against adversarial examples, this paper introduces differential privacy into FL, a mechanism that is also effective for privacy protection. Many popular approaches that incorporate adversarial training into FL improve robustness at the cost of test accuracy. In this work, a novel method is proposed that applies differential privacy to enhance both adversarial robustness and the level of privacy protection. Extensive experiments on multiple datasets demonstrate that the proposed approach outperforms other baselines.
Keywords: Artificial Intelligence; Federated Learning; Differential Privacy; Adversarial Examples
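The abstract does not specify the paper's exact mechanism, but differential privacy in federated learning is commonly realized by clipping each client's update and adding calibrated Gaussian noise before aggregation. The sketch below illustrates that general pattern only; the function name and the parameters `clip_norm` and `noise_multiplier` are illustrative assumptions, not the authors' method.

```python
import numpy as np

def dp_client_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Sketch of a differentially private client update:
    clip the update to a bounded L2 norm (bounding sensitivity),
    then add Gaussian noise scaled to that bound.
    Parameter choices here are illustrative, not from the paper."""
    rng = rng or np.random.default_rng(0)
    update = np.asarray(update, dtype=float)
    norm = np.linalg.norm(update)
    # Scale down only if the norm exceeds clip_norm.
    clipped = update / max(1.0, norm / clip_norm)
    # Gaussian mechanism: noise standard deviation proportional
    # to the sensitivity bound clip_norm.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

# Example: an update with L2 norm 5 is clipped to norm 1, then noised.
noised = dp_client_update([3.0, 4.0])
print(noised.shape)
```

In a full FL round, the server would average such noised updates across the participating clients, so no single client's raw update is revealed.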
 
 
 
