Recover User’s Private Training Image Data by Gradient in Federated Learning
Exchanging gradients is a widely used method in modern multi-node machine learning systems (e.g., distributed training, Federated Learning). Gradients and model weights have been presumed safe to deliver. However, some studies have shown that gradient inversion techniques can reconstruct the in...
Main authors: | Gong, Haimei; Jiang, Liangjun; Liu, Xiaoyang; Wang, Yuanqi; Wang, Lei; Zhang, Ke |
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2022 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9573526/ https://www.ncbi.nlm.nih.gov/pubmed/36236251 http://dx.doi.org/10.3390/s22197157 |
_version_ | 1784810892887588864 |
author | Gong, Haimei Jiang, Liangjun Liu, Xiaoyang Wang, Yuanqi Wang, Lei Zhang, Ke |
author_facet | Gong, Haimei Jiang, Liangjun Liu, Xiaoyang Wang, Yuanqi Wang, Lei Zhang, Ke |
author_sort | Gong, Haimei |
collection | PubMed |
description | Exchanging gradients is a widely used method in modern multi-node machine learning systems (e.g., distributed training, Federated Learning). Gradients and model weights have been presumed safe to deliver. However, some studies have shown that gradient inversion techniques can reconstruct the input images at the pixel level. In this study, we review the research on data leakage by gradient inversion techniques and categorize existing works into three groups: (i) Bias Attacks, (ii) Optimization-Based Attacks, and (iii) Linear Equation Solver Attacks. Based on the characteristics of these algorithms, we propose a privacy attack system, the Single-Sample Reconstruction Attack System (SSRAS). This system can carry out image reconstruction regardless of whether the label can be determined. It extends gradient inversion attacks from a fully connected layer with bias terms to fully connected layers and convolutional neural networks with or without bias terms. We also propose the Improved R-GAP Algorithm, which can utilize the DLG algorithm to derive the ground truth. Furthermore, we introduce the Rank Analysis Index (RA-I) to measure whether the user’s raw image data can be reconstructed. This rank analysis derives virtual constraints [Formula: see text] from the weights. Compared with the most representative attack algorithms, this reconstruction attack system can recover a user’s private training image with high fidelity and a high attack success rate. Experimental results also show the superiority of the attack system over other state-of-the-art attack algorithms. |
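The bias attacks the abstract refers to rest on a closed-form identity: for a fully connected layer y = Wx + b, the gradients a client shares satisfy ∂L/∂W = (∂L/∂y)xᵀ and ∂L/∂b = ∂L/∂y, so any row with a nonzero bias gradient reveals the input x exactly. A minimal NumPy sketch of this idea follows; the layer sizes, loss, and variable names are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected layer y = W x + b followed by a scalar loss.
n_in, n_out = 6, 4
W = rng.normal(size=(n_out, n_in))
b = rng.normal(size=n_out)
x = rng.normal(size=n_in)          # the "private" input to recover

# Forward pass with loss L = 0.5 * ||y||^2, so dL/dy = y.
y = W @ x + b
dL_dy = y

# Gradients a client would exchange: dL/dW = dL/dy * x^T, dL/db = dL/dy.
dL_dW = np.outer(dL_dy, x)
dL_db = dL_dy

# Bias attack: each row i with dL/db_i != 0 yields x exactly,
# since dL/dW[i, :] = dL/db[i] * x.
i = np.argmax(np.abs(dL_db))       # pick the best-conditioned row
x_rec = dL_dW[i, :] / dL_db[i]

print(np.allclose(x_rec, x))       # True: exact recovery
```

This is why the presence of bias terms matters in the abstract's taxonomy: without ∂L/∂b, the per-row scale factor is unknown and the attacker must fall back on optimization- or solver-based methods.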
format | Online Article Text |
id | pubmed-9573526 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9573526 2022-10-17 Recover User’s Private Training Image Data by Gradient in Federated Learning Gong, Haimei Jiang, Liangjun Liu, Xiaoyang Wang, Yuanqi Wang, Lei Zhang, Ke Sensors (Basel) Article Exchanging gradients is a widely used method in modern multi-node machine learning systems (e.g., distributed training, Federated Learning). Gradients and model weights have been presumed safe to deliver. However, some studies have shown that gradient inversion techniques can reconstruct the input images at the pixel level. In this study, we review the research on data leakage by gradient inversion techniques and categorize existing works into three groups: (i) Bias Attacks, (ii) Optimization-Based Attacks, and (iii) Linear Equation Solver Attacks. Based on the characteristics of these algorithms, we propose a privacy attack system, the Single-Sample Reconstruction Attack System (SSRAS). This system can carry out image reconstruction regardless of whether the label can be determined. It extends gradient inversion attacks from a fully connected layer with bias terms to fully connected layers and convolutional neural networks with or without bias terms. We also propose the Improved R-GAP Algorithm, which can utilize the DLG algorithm to derive the ground truth. Furthermore, we introduce the Rank Analysis Index (RA-I) to measure whether the user’s raw image data can be reconstructed. This rank analysis derives virtual constraints [Formula: see text] from the weights. Compared with the most representative attack algorithms, this reconstruction attack system can recover a user’s private training image with high fidelity and a high attack success rate. Experimental results also show the superiority of the attack system over other state-of-the-art attack algorithms. MDPI 2022-09-21 /pmc/articles/PMC9573526/ /pubmed/36236251 http://dx.doi.org/10.3390/s22197157 Text en © 2022 by the authors. 
https://creativecommons.org/licenses/by/4.0/Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
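The optimization-based attacks in the abstract's taxonomy (DLG being the representative) instead treat the input as an unknown and minimize a gradient-matching loss ||∇L(x′) − ∇L(x)||². A hedged sketch on a toy linear model with a known label is below; the model, step size, and iteration count are assumptions chosen for illustration, whereas the real attacks target deep networks and typically use L-BFGS:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model y = w . x with squared loss L = 0.5 * (y - t)^2
# and a known label t.
d = 8
w = rng.normal(size=d)
x_true = rng.normal(size=d)        # the private training sample
t = 1.0

def shared_gradient(x):
    """Gradient dL/dw = (w . x - t) * x that a client would exchange."""
    return (w @ x - t) * x

g_true = shared_gradient(x_true)

def matching_loss(x):
    r = shared_gradient(x) - g_true
    return r @ r

# Attacker: start from dummy data and descend on the matching loss.
x_dummy = rng.normal(size=d)
init = matching_loss(x_dummy)
lr = 1e-3
for _ in range(5000):
    r = shared_gradient(x_dummy) - g_true
    # Chain rule: the Jacobian of (w.x - t)*x w.r.t. x is x w^T + (w.x - t) I,
    # so the matching-loss gradient is 2 * (w*(r.x) + (w.x - t)*r).
    grad = 2.0 * (w * (r @ x_dummy) + (w @ x_dummy - t) * r)
    x_dummy -= lr * grad

print(matching_loss(x_dummy) < init)   # True: matching loss decreased
```

Unlike the closed-form bias attack, this route gives no exactness guarantee, which is the gap the abstract's Rank Analysis Index is meant to quantify: it estimates from the weights whether enough constraints exist for reconstruction to succeed.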
spellingShingle | Article Gong, Haimei Jiang, Liangjun Liu, Xiaoyang Wang, Yuanqi Wang, Lei Zhang, Ke Recover User’s Private Training Image Data by Gradient in Federated Learning |
title | Recover User’s Private Training Image Data by Gradient in Federated Learning |
title_full | Recover User’s Private Training Image Data by Gradient in Federated Learning |
title_fullStr | Recover User’s Private Training Image Data by Gradient in Federated Learning |
title_full_unstemmed | Recover User’s Private Training Image Data by Gradient in Federated Learning |
title_short | Recover User’s Private Training Image Data by Gradient in Federated Learning |
title_sort | recover user’s private training image data by gradient in federated learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9573526/ https://www.ncbi.nlm.nih.gov/pubmed/36236251 http://dx.doi.org/10.3390/s22197157 |
work_keys_str_mv | AT gonghaimei recoverusersprivatetrainingimagedatabygradientinfederatedlearning AT jiangliangjun recoverusersprivatetrainingimagedatabygradientinfederatedlearning AT liuxiaoyang recoverusersprivatetrainingimagedatabygradientinfederatedlearning AT wangyuanqi recoverusersprivatetrainingimagedatabygradientinfederatedlearning AT wanglei recoverusersprivatetrainingimagedatabygradientinfederatedlearning AT zhangke recoverusersprivatetrainingimagedatabygradientinfederatedlearning |