Wasserstein Distance-Based Deep Leakage from Gradients
Federated learning protects the private information in a data set by sharing only the average gradient. However, the “Deep Leakage from Gradients” (DLG) algorithm, a gradient-based feature reconstruction attack, can recover private training data from the gradients shared in federated learning, resulting in private information leakage. The algorithm also suffers from slow model convergence and poor accuracy of the inversely generated images. To address these issues, a Wasserstein distance-based DLG method, named WDLG, is proposed. The WDLG method uses the Wasserstein distance as the training loss function to improve both the inverted image quality and the model convergence. The hard-to-compute Wasserstein distance is converted into a form that can be computed iteratively using the Lipschitz condition and Kantorovich–Rubinstein duality. Theoretical analysis proves the differentiability and continuity of the Wasserstein distance. Finally, experimental results show that the WDLG algorithm is superior to DLG in both training speed and inversion image quality. At the same time, we show through experiments that differential privacy can be used for perturbation protection, which provides some ideas for the development of a privacy-preserving deep learning framework.
Main Authors: | Wang, Zifan; Peng, Changgen; He, Xing; Tan, Weijie |
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10217429/ https://www.ncbi.nlm.nih.gov/pubmed/37238565 http://dx.doi.org/10.3390/e25050810 |
_version_ | 1785048535792615424 |
author | Wang, Zifan Peng, Changgen He, Xing Tan, Weijie |
author_facet | Wang, Zifan Peng, Changgen He, Xing Tan, Weijie |
author_sort | Wang, Zifan |
collection | PubMed |
description | Federated learning protects the private information in a data set by sharing only the average gradient. However, the “Deep Leakage from Gradients” (DLG) algorithm, a gradient-based feature reconstruction attack, can recover private training data from the gradients shared in federated learning, resulting in private information leakage. The algorithm also suffers from slow model convergence and poor accuracy of the inversely generated images. To address these issues, a Wasserstein distance-based DLG method, named WDLG, is proposed. The WDLG method uses the Wasserstein distance as the training loss function to improve both the inverted image quality and the model convergence. The hard-to-compute Wasserstein distance is converted into a form that can be computed iteratively using the Lipschitz condition and Kantorovich–Rubinstein duality. Theoretical analysis proves the differentiability and continuity of the Wasserstein distance. Finally, experimental results show that the WDLG algorithm is superior to DLG in both training speed and inversion image quality. At the same time, we show through experiments that differential privacy can be used for perturbation protection, which provides some ideas for the development of a privacy-preserving deep learning framework. |
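The gradient-matching idea behind DLG, which WDLG refines, can be illustrated with a minimal, hypothetical sketch. Everything below is an illustrative simplification, not the authors' implementation: the model is a scalar linear regression with squared loss, the constants are made up, and the attacker is assumed to know the label (as in label-inference variants of DLG). The victim shares only the gradient dL/dw; the attacker runs gradient descent on dummy data until its gradient matches the shared one.

```python
# Hypothetical DLG-style gradient-matching sketch (not the paper's code).
# Model: y_hat = w * x, loss L = (w*x - y)^2. The victim shares dL/dw.

def grad_w(w, x, y):
    """Gradient of the squared loss (w*x - y)**2 with respect to w."""
    return 2.0 * (w * x - y) * x

w = 0.7                       # model weight, known to all parties
x_true, y_true = 1.5, 2.0     # private training point (never shared)
g_shared = grad_w(w, x_true, y_true)   # the only value the attacker sees

# Attacker: minimize (g_dummy - g_shared)^2 over a dummy input x_d,
# assuming the label y_true is known.
x_d, lr = 0.1, 0.05
for _ in range(20000):
    diff = grad_w(w, x_d, y_true) - g_shared
    # Chain rule: d(grad_w)/dx = 4*w*x - 2*y, since grad_w = 2*w*x**2 - 2*x*y
    x_d -= lr * 2.0 * diff * (4.0 * w * x_d - 2.0 * y_true)

# The dummy gradient now matches the shared gradient; note the match is not
# necessarily unique -- any x_d on the level set of grad_w is a solution.
print(abs(grad_w(w, x_d, y_true) - g_shared))
```

Where this toy sketch uses the squared Euclidean distance between gradients as the matching loss, WDLG instead measures the discrepancy with a Wasserstein distance evaluated via Kantorovich–Rubinstein duality under a Lipschitz constraint, which the abstract credits with faster convergence and higher-quality inverted images.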
format | Online Article Text |
id | pubmed-10217429 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-102174292023-05-27 Wasserstein Distance-Based Deep Leakage from Gradients Wang, Zifan Peng, Changgen He, Xing Tan, Weijie Entropy (Basel) Article Federated learning protects the private information in a data set by sharing only the average gradient. However, the “Deep Leakage from Gradients” (DLG) algorithm, a gradient-based feature reconstruction attack, can recover private training data from the gradients shared in federated learning, resulting in private information leakage. The algorithm also suffers from slow model convergence and poor accuracy of the inversely generated images. To address these issues, a Wasserstein distance-based DLG method, named WDLG, is proposed. The WDLG method uses the Wasserstein distance as the training loss function to improve both the inverted image quality and the model convergence. The hard-to-compute Wasserstein distance is converted into a form that can be computed iteratively using the Lipschitz condition and Kantorovich–Rubinstein duality. Theoretical analysis proves the differentiability and continuity of the Wasserstein distance. Finally, experimental results show that the WDLG algorithm is superior to DLG in both training speed and inversion image quality. At the same time, we show through experiments that differential privacy can be used for perturbation protection, which provides some ideas for the development of a privacy-preserving deep learning framework. MDPI 2023-05-17 /pmc/articles/PMC10217429/ /pubmed/37238565 http://dx.doi.org/10.3390/e25050810 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Wang, Zifan Peng, Changgen He, Xing Tan, Weijie Wasserstein Distance-Based Deep Leakage from Gradients |
title | Wasserstein Distance-Based Deep Leakage from Gradients |
title_full | Wasserstein Distance-Based Deep Leakage from Gradients |
title_fullStr | Wasserstein Distance-Based Deep Leakage from Gradients |
title_full_unstemmed | Wasserstein Distance-Based Deep Leakage from Gradients |
title_short | Wasserstein Distance-Based Deep Leakage from Gradients |
title_sort | wasserstein distance-based deep leakage from gradients |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10217429/ https://www.ncbi.nlm.nih.gov/pubmed/37238565 http://dx.doi.org/10.3390/e25050810 |
work_keys_str_mv | AT wangzifan wassersteindistancebaseddeepleakagefromgradients AT pengchanggen wassersteindistancebaseddeepleakagefromgradients AT hexing wassersteindistancebaseddeepleakagefromgradients AT tanweijie wassersteindistancebaseddeepleakagefromgradients |