Federated learning has attracted a wave of attention in the field of deep learning since it was proposed in 2016. It is well suited to modeling over separate, independently held datasets that contain sensitive information, thereby breaking down the data silos faced by most technology companies. However, the combination of differential privacy and federated learning remains unsatisfactory in current studies. In this paper, we propose a new federated learning algorithm in which a relaxation of differential privacy termed f-differential privacy (f-DP) is employed for a detailed and rigorous privacy analysis that strengthens privacy protection. f-DP retains the hypothesis-testing interpretation of differential privacy and introduces a trade-off function that relates the type I and type II errors, thereby tracking privacy loss and measuring privacy leakage more accurately. We prove that this approach achieves tighter privacy bounds and a more effective privacy guarantee than the centralized differential privacy used in previous federated learning frameworks, while maintaining a comparably high accuracy of the shared model, thus striking a better balance between data utility and privacy.
Keywords: Artificial Intelligence, Federated learning, hypothesis testing, differential privacy, trade-off function
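
For concreteness, the trade-off function at the core of f-DP can be stated as follows; this is the standard definition from the f-DP literature (Dong, Roth, and Su), reproduced here as a reference sketch in LaTeX rather than as this paper's own notation. For distributions P and Q, consider testing H_0: P against H_1: Q with a rejection rule \phi, whose type I error is \alpha_\phi = \mathbb{E}_P[\phi] and whose type II error is \beta_\phi = 1 - \mathbb{E}_Q[\phi]. Then

    T(P, Q)(\alpha) \;=\; \inf_{\phi} \{\, \beta_\phi : \alpha_\phi \le \alpha \,\}, \qquad \alpha \in [0, 1],

and a randomized mechanism M is said to be f-DP if T(M(S), M(S'))(\alpha) \ge f(\alpha) for all neighboring datasets S, S' and all \alpha \in [0, 1].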