Deep learning methods are widely used in practical engineering projects. Owing to privacy regulations, traditional centralized learning may be unsuitable for application scenarios that involve sensitive data, such as smart healthcare and image recognition. Federated learning has received extensive attention as a new collaborative learning paradigm that can break down data barriers between institutions and improve model performance. However, a client's private information can be inferred from its shared parameters, and the communication cost of federated learning systems is high because of frequent, large-scale communication rounds. This paper proposes a dynamic gradient exchange privacy-preserving federated learning framework that combines two techniques: differential privacy and gradient compression. During training, differential privacy is used to perturb the clients' gradient parameters, and dynamic gradient exchange rejects communication from some "lazy" clients. Theoretical analysis and experimental results demonstrate the advantages of the proposed framework in terms of accuracy, privacy protection, and communication savings.

Keywords: computer technology; federated learning; communication compression; differential privacy
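The two mechanisms named in the abstract — perturbing client gradients for differential privacy, and skipping uploads from "lazy" clients whose gradients contribute little — could be simulated roughly as below. This is a minimal illustrative sketch, not the paper's implementation: the clipping bound `clip_norm`, noise scale `sigma`, norm threshold `tau`, and all function names are assumptions introduced here.

```python
import math
import random

random.seed(0)

def l2_norm(vec):
    """Euclidean norm of a gradient represented as a plain list of floats."""
    return math.sqrt(sum(x * x for x in vec))

def dp_perturb(grad, clip_norm=1.0, sigma=0.5):
    """Clip the gradient to clip_norm, then add Gaussian noise calibrated
    to the clipping bound (the standard Gaussian-mechanism recipe)."""
    scale = min(1.0, clip_norm / (l2_norm(grad) + 1e-12))
    return [x * scale + random.gauss(0.0, sigma * clip_norm) for x in grad]

def dynamic_exchange(client_grads, tau=0.1):
    """Drop 'lazy' clients whose gradient norm falls below the threshold tau;
    those clients simply skip communication this round."""
    return [g for g in client_grads if l2_norm(g) > tau]

# One simulated round: four clients with 3-dimensional gradients of
# varying magnitude (the small-sigma clients are the 'lazy' ones).
clients = [[random.gauss(0.0, s) for _ in range(3)] for s in (1.0, 0.01, 0.5, 0.02)]
uploaded = [dp_perturb(g) for g in dynamic_exchange(clients)]
# The server averages only the gradients it actually received.
server_update = [sum(col) / len(uploaded) for col in zip(*uploaded)]
print(len(uploaded), len(server_update))
```

Rejecting low-norm gradients before the (already noisy) upload is what saves communication: clients whose updates would be dominated by the injected noise never transmit at all.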