Federated learning (FL) allows multiple participants to train models collaboratively by keeping their data local and exchanging only model updates; however, this paradigm also widens the attack surface. Models trained in FL are as vulnerable to adversarial examples as centrally trained models, and the frequent exchange of updates increases the risk of data breaches. To enhance robustness against adversarial examples, this paper introduces differential privacy, an effective mechanism for privacy protection, into FL. Many existing approaches that incorporate adversarial training into FL improve robustness at the cost of test accuracy. In this work, a novel method is proposed that applies differential privacy to enhance both adversarial robustness and the level of privacy protection. Extensive experiments on multiple datasets demonstrate that the proposed approach outperforms other baselines.

Keywords: Artificial Intelligence; Federated Learning; Differential Privacy; Adversarial Examples