On the role of deep learning model complexity in adversarial robustness for medical images
BACKGROUND: Deep learning (DL) models for medical image classification are highly vulnerable to adversarial attacks. An adversary could modify the input data in imperceptible ways so that a model is tricked into predicting, say, that an image which actually exhibits a malignant tumor is benign...
Main Authors: Rodriguez, David; Nayak, Tapsya; Chen, Yidong; Krishnan, Ram; Huang, Yufei
Format: Online Article Text
Language: English
Published: BioMed Central, 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9208111/ | https://www.ncbi.nlm.nih.gov/pubmed/35725429 | http://dx.doi.org/10.1186/s12911-022-01891-w
description | BACKGROUND: Deep learning (DL) models for medical image classification are highly vulnerable to adversarial attacks. An adversary could modify the input data in imperceptible ways so that a model is tricked into predicting, say, that an image which actually exhibits a malignant tumor is benign. However, the adversarial robustness of DL models for medical images has not been adequately studied. DL in medicine is inundated with models of various complexity, particularly very large models. In this work, we investigate the role of model complexity in adversarial settings. RESULTS: Consider a set of DL models that exhibit similar performance for a given task. These models are trained in the usual manner and are not trained to defend against adversarial attacks. We demonstrate that, among those models, simpler models of reduced complexity show a greater level of robustness against adversarial attacks than the larger models that tend to be used in medical applications. On the other hand, we also show that once those models undergo adversarial training, the adversarially trained medical image DL models exhibit a greater degree of robustness than the standard-trained models at all model complexities. CONCLUSION: The above result has significant practical relevance. When medical practitioners lack the expertise or resources to defend against adversarial attacks, we recommend that they select the smallest of the models that exhibit adequate performance. Such a model will naturally be more robust to adversarial attacks than the larger models. |
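The "imperceptible modification" the abstract describes can be sketched with the Fast Gradient Sign Method (FGSM), a standard first-order attack. This is a toy illustration only, not the paper's models or data: it attacks a randomly weighted logistic classifier rather than a medical-image DL model, and every variable name and value below is an illustrative assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step: x' = x + eps * sign(dL/dx), where L is the
    binary cross-entropy loss of a logistic model p = sigmoid(w.x + b).
    For that loss, dL/dx = (p - y) * w."""
    p = sigmoid(w @ x + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=16)                          # toy model weights
b = 0.0
x = rng.normal(size=16)                          # a clean "image"
y = 1.0 if sigmoid(w @ x + b) >= 0.5 else 0.0    # treat the model's own label as truth

x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```

Each input coordinate moves by at most `eps`, yet the perturbation is aligned with the loss gradient, so the predicted probability is pushed away from the correct label; on high-dimensional images the same budget per pixel is visually imperceptible while the effect on the logits accumulates across dimensions.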
spelling | pubmed-9208111. On the role of deep learning model complexity in adversarial robustness for medical images. Rodriguez, David; Nayak, Tapsya; Chen, Yidong; Krishnan, Ram; Huang, Yufei. BMC Med Inform Decis Mak (Research). BioMed Central, published online 2022-06-20. /pmc/articles/PMC9208111/ /pubmed/35725429 http://dx.doi.org/10.1186/s12911-022-01891-w. Text, English. © The Author(s) 2022. Open Access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/); the Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |