
Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups


Bibliographic Details
Main Authors: Wainakh, Aidmar, Zimmer, Ephraim, Subedi, Sandeep, Keim, Jens, Grube, Tim, Karuppayah, Shankar, Sanchez Guinea, Alejandro, Mühlhäuser, Max
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9824092/
https://www.ncbi.nlm.nih.gov/pubmed/36616629
http://dx.doi.org/10.3390/s23010031
author Wainakh, Aidmar
Zimmer, Ephraim
Subedi, Sandeep
Keim, Jens
Grube, Tim
Karuppayah, Shankar
Sanchez Guinea, Alejandro
Mühlhäuser, Max
collection PubMed
description Deep learning pervades heavily data-driven disciplines in research and development. The Internet of Things and sensor systems, which enable smart environments and services, are settings where deep learning can provide invaluable utility. However, the data in these systems are very often directly or indirectly related to people, which raises privacy concerns. Federated learning (FL) mitigates some of these concerns and empowers deep learning in sensor-driven environments by enabling multiple entities to collaboratively train a machine learning model without sharing their data. Nevertheless, a number of works in the literature propose attacks that can manipulate the model and disclose information about the training data in FL. As a result, there has been a growing belief that FL is highly vulnerable to severe attacks. Although these attacks do indeed highlight security and privacy risks in FL, some of them may not be as effective in production deployment because they are feasible only given special—sometimes impractical—assumptions. In this paper, we investigate this issue by conducting a quantitative analysis of the attacks against FL and their evaluation settings in 48 papers. This analysis is the first of its kind to reveal several research gaps with regard to the types and architectures of target models. Additionally, the quantitative analysis allows us to highlight unrealistic assumptions in some attacks related to the hyper-parameters of the model and data distribution. Furthermore, we identify fallacies in the evaluation of attacks, which raise questions about the generalizability of the conclusions. As a remedy, we propose a set of recommendations to promote adequate evaluations.
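The abstract above hinges on the core federated learning exchange: clients train locally and share only model updates, which a server aggregates. As a rough illustration (not the paper's own method or experimental setup), here is a minimal federated-averaging sketch in Python; the linear model, toy data, and all names are hypothetical, chosen only to make concrete the idea that raw data never leaves the clients:

```python
import numpy as np

# Minimal federated-averaging (FedAvg-style) sketch -- illustrative only.
# Each client runs a few gradient steps on its private data; the server
# averages the returned weights, so raw training data is never shared.

def local_update(weights, X, y, lr=0.1, steps=5):
    """A few local gradient steps for a linear model with squared loss."""
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fedavg_round(global_weights, clients):
    """One communication round: collect local weights, take weighted mean."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    local_ws = [local_update(global_weights, X, y) for X, y in clients]
    # Average client models weighted by local dataset size.
    return sum(w * s for w, s in zip(local_ws, sizes)) / sizes.sum()

# Hypothetical toy data: four clients, each holding a private dataset.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):
    X = rng.normal(size=(30, 3))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=30)))

w = np.zeros(3)
for _ in range(30):
    w = fedavg_round(w, clients)
print("global weights after 30 rounds:", w)  # approaches true_w
```

The attacks surveyed in the paper target exactly this exchange: the shared weight updates can be manipulated to poison the model, or inverted to disclose information about the clients' private training data.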
format Online
Article
Text
id pubmed-9824092
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9824092 2023-01-08 Sensors (Basel) Article MDPI 2022-12-20 /pmc/articles/PMC9824092/ /pubmed/36616629 http://dx.doi.org/10.3390/s23010031 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Federated Learning Attacks Revisited: A Critical Discussion of Gaps, Assumptions, and Evaluation Setups
topic Article