
Edge-Cloud Collaborative Defense against Backdoor Attacks in Federated Learning

Federated learning is a distributed collaborative training paradigm widely used in IoT scenarios for edge-computing intelligent services. However, it is vulnerable to malicious attacks, chiefly backdoor attacks: once an edge node mounts a backdoor attack, the embedded backdoor pattern rapidly propagates to all related edge nodes, posing a considerable challenge to security-sensitive edge-computing intelligent services. Traditional edge collaborative backdoor defenses trust only the cloud server by default; yet edge-computing intelligent services have limited bandwidth and unstable network connections, which prevent edge devices from retraining their models or updating the global model. It is therefore crucial to detect in time whether the data of edge nodes have been poisoned. This paper proposes a layered defense framework for edge-computing intelligent services. At the edge, we combine a gradient ascent strategy with an attention self-distillation mechanism to maximize the correlation between edge-device data and the edge object categories and to train a model that is as clean as possible. On the server side, we first apply a two-layer backdoor detection mechanism to eliminate backdoor updates and then use attention self-distillation to restore model performance. Our results show that this two-stage defense is better suited to protecting edge-computing intelligent services: it not only weakens the effectiveness of the backdoor at the edge but also carries out defense again at the server, making the model more secure. The precision of our model on the main task is almost the same as that of a clean model.
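The abstract describes the server-side stage only at a high level (screen client updates for backdoors, then aggregate and restore performance). The snippet below is a minimal, hypothetical sketch of that kind of two-step screening in plain NumPy: norm clipping followed by cosine-similarity outlier filtering. The function name `aggregate_with_screening` and both thresholds are illustrative assumptions; the paper's actual two-layer backdoor detection and attention self-distillation mechanisms are not detailed in this record and are not reproduced here.

```python
# Hypothetical sketch of server-side screening of client updates before
# aggregation. The two checks (update-norm clipping and cosine-similarity
# outlier filtering) are generic robust-aggregation heuristics, NOT the
# paper's actual two-layer detection mechanism.
import numpy as np

def aggregate_with_screening(updates, norm_bound=1.0, cos_threshold=0.0):
    """Filter suspicious client updates, then average the survivors.

    updates: list of 1-D np.ndarray, one flattened model update per client.
    norm_bound: step 1 -- clip each update to this L2 norm.
    cos_threshold: step 2 -- drop updates whose cosine similarity to the
        coordinate-wise median direction falls below this value.
    """
    # Step 1: norm clipping bounds the influence of any single client.
    clipped = []
    for u in updates:
        n = np.linalg.norm(u)
        clipped.append(u * min(1.0, norm_bound / (n + 1e-12)))

    # Step 2: compare each clipped update with the coordinate-wise median
    # direction and discard the ones pointing away from it.
    median_dir = np.median(np.stack(clipped), axis=0)
    median_dir /= (np.linalg.norm(median_dir) + 1e-12)
    kept = [u for u in clipped
            if np.dot(u, median_dir) / (np.linalg.norm(u) + 1e-12) > cos_threshold]

    # Fall back to all clipped updates if screening rejected everything.
    if not kept:
        kept = clipped
    return np.mean(np.stack(kept), axis=0)

# Example: nine benign clients and one scaled, divergent "backdoor" update.
rng = np.random.default_rng(0)
benign = [rng.normal(0.1, 0.01, size=100) for _ in range(9)]
malicious = [-5.0 * np.ones(100)]
global_update = aggregate_with_screening(benign + malicious)
print(global_update[:3])
```

In this toy run the divergent update is clipped and then rejected by the similarity check, so the aggregated update is driven by the benign clients only; any resemblance to the paper's detection pipeline is incidental.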


Bibliographic Details
Main Authors: Yang, Jie; Zheng, Jun; Wang, Haochen; Li, Jiaxing; Sun, Haipeng; Han, Weifeng; Jiang, Nan; Tan, Yu-An
Format: Online Article Text
Language: English
Journal: Sensors (Basel)
Published: MDPI, 17 January 2023
Collection: PubMed (PMC9921795)
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9921795/
https://www.ncbi.nlm.nih.gov/pubmed/36772101
http://dx.doi.org/10.3390/s23031052
License: © 2023 by the authors; licensee MDPI, Basel, Switzerland. This is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).