
Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-ray Data

Privacy regulations and the physical distribution of heterogeneous data are often primary concerns for the development of deep learning models in a medical context. This paper evaluates the feasibility of differentially private federated learning for chest X-ray classification as a defense against data privacy attacks. To the best of our knowledge, we are the first to directly compare the impact of differentially private training on two different neural network architectures, DenseNet121 and ResNet50. Extending the federated learning environments previously analyzed in terms of privacy, we simulated a heterogeneous and imbalanced federated setting by distributing images from the public CheXpert and Mendeley chest X-ray datasets unevenly among 36 clients. Both non-private baseline models achieved an area under the receiver operating characteristic curve (AUC) of [Formula: see text] on the binary classification task of detecting the presence of a medical finding. We demonstrate that both model architectures are vulnerable to privacy violation by applying image reconstruction attacks to local model updates from individual clients. The attack was particularly successful during later training stages. To mitigate the risk of a privacy breach, we integrated Rényi differential privacy with a Gaussian noise mechanism into local model training. We evaluate model performance and attack vulnerability for privacy budgets [Formula: see text]. The DenseNet121 achieved the best utility-privacy trade-off with an AUC of [Formula: see text] for [Formula: see text]. Model performance deteriorated slightly for individual clients compared to the non-private baseline. The ResNet50 only reached an AUC of [Formula: see text] in the same privacy setting. Its performance was inferior to that of the DenseNet121 for all considered privacy constraints, suggesting that the DenseNet121 architecture is more robust to differentially private training.

Bibliographic Details
Main Authors: Ziegler, Joceline, Pfitzner, Bjarne, Schulz, Heinrich, Saalbach, Axel, Arnrich, Bert
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9320045/
https://www.ncbi.nlm.nih.gov/pubmed/35890875
http://dx.doi.org/10.3390/s22145195
author Ziegler, Joceline
Pfitzner, Bjarne
Schulz, Heinrich
Saalbach, Axel
Arnrich, Bert
collection PubMed
description Privacy regulations and the physical distribution of heterogeneous data are often primary concerns for the development of deep learning models in a medical context. This paper evaluates the feasibility of differentially private federated learning for chest X-ray classification as a defense against data privacy attacks. To the best of our knowledge, we are the first to directly compare the impact of differentially private training on two different neural network architectures, DenseNet121 and ResNet50. Extending the federated learning environments previously analyzed in terms of privacy, we simulated a heterogeneous and imbalanced federated setting by distributing images from the public CheXpert and Mendeley chest X-ray datasets unevenly among 36 clients. Both non-private baseline models achieved an area under the receiver operating characteristic curve (AUC) of [Formula: see text] on the binary classification task of detecting the presence of a medical finding. We demonstrate that both model architectures are vulnerable to privacy violation by applying image reconstruction attacks to local model updates from individual clients. The attack was particularly successful during later training stages. To mitigate the risk of a privacy breach, we integrated Rényi differential privacy with a Gaussian noise mechanism into local model training. We evaluate model performance and attack vulnerability for privacy budgets [Formula: see text]. The DenseNet121 achieved the best utility-privacy trade-off with an AUC of [Formula: see text] for [Formula: see text]. Model performance deteriorated slightly for individual clients compared to the non-private baseline. The ResNet50 only reached an AUC of [Formula: see text] in the same privacy setting. Its performance was inferior to that of the DenseNet121 for all considered privacy constraints, suggesting that the DenseNet121 architecture is more robust to differentially private training.
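The defense described in the abstract, Rényi differential privacy with a Gaussian noise mechanism in local model training, follows the standard clip-then-noise recipe: bound each example's gradient contribution by an L2 clipping norm, then add Gaussian noise calibrated to that bound before the update leaves the client. The sketch below is an illustrative pure-Python version of that recipe, not the authors' implementation; the function names are hypothetical, and the Rényi accounting that translates the noise multiplier into a privacy budget ε is omitted.

```python
import math
import random

def clip_grad(grad, clip_norm):
    """Scale a per-example gradient so its L2 norm is at most clip_norm."""
    norm = math.sqrt(sum(x * x for x in grad))
    scale = min(1.0, clip_norm / (norm + 1e-12))
    return [x * scale for x in grad]

def dp_gaussian_update(per_example_grads, clip_norm, noise_multiplier, seed=0):
    """Clip each per-example gradient, sum, add Gaussian noise with
    standard deviation noise_multiplier * clip_norm, then average."""
    rng = random.Random(seed)
    clipped = [clip_grad(g, clip_norm) for g in per_example_grads]
    dim = len(clipped[0])
    total = [sum(g[i] for g in clipped) for i in range(dim)]
    sigma = noise_multiplier * clip_norm
    noisy = [t + rng.gauss(0.0, sigma) for t in total]
    return [x / len(per_example_grads) for x in noisy]

# Two toy per-example gradients; the first (norm 5.0) gets clipped to norm 1.0.
grads = [[3.0, 4.0], [0.3, 0.4]]
update = dp_gaussian_update(grads, clip_norm=1.0, noise_multiplier=1.1)
```

With `noise_multiplier = 0` the function reduces to plain averaging of clipped gradients; in practice the multiplier is chosen so that, under Rényi accounting, the accumulated privacy loss over all training rounds stays within the target budget ε.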
format Online
Article
Text
id pubmed-9320045
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9320045 2022-07-27 Sensors (Basel) Article MDPI 2022-07-11 /pmc/articles/PMC9320045/ /pubmed/35890875 http://dx.doi.org/10.3390/s22145195 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-ray Data
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9320045/
https://www.ncbi.nlm.nih.gov/pubmed/35890875
http://dx.doi.org/10.3390/s22145195