Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning


Bibliographic Details
Main Authors: Minagi, Akinori; Hirano, Hokuto; Takemoto, Kazuhiro
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8875959/
https://www.ncbi.nlm.nih.gov/pubmed/35200740
http://dx.doi.org/10.3390/jimaging8020038
_version_ 1784658055162494976
author Minagi, Akinori
Hirano, Hokuto
Takemoto, Kazuhiro
author_facet Minagi, Akinori
Hirano, Hokuto
Takemoto, Kazuhiro
author_sort Minagi, Akinori
collection PubMed
description Transfer learning from natural images is used in deep neural networks (DNNs) for medical image classification to achieve computer-aided clinical diagnosis. Although the adversarial vulnerability of DNNs hinders practical applications owing to the high stakes of diagnosis, adversarial attacks are expected to be limited because the training datasets (medical images) that such attacks typically require are generally unavailable owing to security and privacy concerns. Nevertheless, in this study, we demonstrated that adversarial attacks on medical DNN models with transfer learning are also possible using natural images, even when the medical images are unavailable; in particular, we showed that universal adversarial perturbations (UAPs) can be generated from natural images. UAPs generated from natural images were effective for both non-targeted and targeted attacks, and their performance was significantly higher than that of random controls. The use of transfer learning thus opens a security hole that decreases the reliability and safety of computer-based disease diagnosis. Training models from random initialization reduced the performance of UAPs generated from natural images but did not completely eliminate the vulnerability. The susceptibility of medical DNN models to UAPs generated from natural images is expected to become a significant security threat.
format Online
Article
Text
id pubmed-8875959
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8875959 2022-02-26 Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning Minagi, Akinori Hirano, Hokuto Takemoto, Kazuhiro J Imaging Article MDPI 2022-02-04 /pmc/articles/PMC8875959/ /pubmed/35200740 http://dx.doi.org/10.3390/jimaging8020038 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Minagi, Akinori
Hirano, Hokuto
Takemoto, Kazuhiro
Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning
title Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning
title_full Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning
title_fullStr Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning
title_full_unstemmed Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning
title_short Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning
title_sort natural images allow universal adversarial attacks on medical image classification using deep neural networks with transfer learning
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8875959/
https://www.ncbi.nlm.nih.gov/pubmed/35200740
http://dx.doi.org/10.3390/jimaging8020038
work_keys_str_mv AT minagiakinori naturalimagesallowuniversaladversarialattacksonmedicalimageclassificationusingdeepneuralnetworkswithtransferlearning
AT hiranohokuto naturalimagesallowuniversaladversarialattacksonmedicalimageclassificationusingdeepneuralnetworkswithtransferlearning
AT takemotokazuhiro naturalimagesallowuniversaladversarialattacksonmedicalimageclassificationusingdeepneuralnetworkswithtransferlearning
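
Note on the attack described in the abstract: below is a minimal sketch of generating a non-targeted universal adversarial perturbation (UAP) from natural images against a transfer-learned medical classifier, written in PyTorch. The checkpoint name ("medical_model.pt"), the natural-image folder ("imagenet_subset/"), the two-class head, and the hyperparameters (eps, step, epoch count) are illustrative assumptions, not the authors' exact setup; the projected-gradient loop is a simplified stand-in for the iterative UAP algorithm used in the paper.

import torch
import torch.nn.functional as F
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Assumed target: a classifier fine-tuned from ImageNet weights
# (hypothetical checkpoint; inputs assumed to be unnormalized [0, 1] tensors).
model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # e.g., binary diagnosis
model.load_state_dict(torch.load("medical_model.pt", map_location=device))
model.to(device).eval()

# Natural images stand in for the unavailable medical training data.
loader = torch.utils.data.DataLoader(
    datasets.ImageFolder(
        "imagenet_subset/",  # hypothetical folder of natural images
        transform=transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
        ]),
    ),
    batch_size=32,
    shuffle=True,
)

eps = 8 / 255   # L-infinity bound on the UAP
step = 1 / 255  # per-iteration step size
uap = torch.zeros(1, 3, 224, 224, device=device)

for epoch in range(5):
    for x, _ in loader:
        x = x.to(device)
        # The model's clean predictions serve as pseudo-labels, since the
        # true medical labels are unavailable to the attacker.
        with torch.no_grad():
            pseudo_labels = model(x).argmax(dim=1)
        delta = uap.clone().requires_grad_(True)
        logits = model((x + delta).clamp(0, 1))
        # Non-targeted objective: push inputs away from their clean predictions.
        loss = F.cross_entropy(logits, pseudo_labels)
        loss.backward()
        # Ascend the loss, then project back onto the epsilon ball.
        uap = (uap + step * delta.grad.sign()).clamp(-eps, eps).detach()

torch.save(uap, "uap_from_natural_images.pt")

For the targeted variant reported in the abstract, one would instead fix a single target label for every input and descend the loss (uap - step * delta.grad.sign()), so that the perturbation steers all inputs toward the chosen class.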