Adversarial Attacks on Medical Image Classification

SIMPLE SUMMARY: As we increasingly rely on advanced imaging for medical diagnosis, it’s vital that our computer programs can accurately interpret these images. Even a single mistaken pixel can lead to wrong predictions, potentially causing incorrect medical decisions. This study looks into how these tiny mistakes can trick our advanced algorithms. By changing just one or a few pixels on medical images, we tested how various computer models handled these changes. The findings showed that even small disruptions made it hard for the models to correctly interpret the images. This raises concerns about how reliable our current computer-aided diagnostic tools are and underscores the need for models that can resist such small disturbances.

ABSTRACT: Given the growing number of medical images produced by diverse radiological imaging techniques, radiography examinations with computer-aided diagnosis could greatly assist clinical applications. However, an inaccuracy of even a single pixel can lead to an incorrect prediction for a medical image, and misclassification may in turn lead to the wrong clinical decision. This scenario resembles an adversarial attack on a deep learning model. This study therefore investigates one-pixel and multi-pixel attacks on Deep Neural Network (DNN) models trained on various medical image datasets. Common multiclass and multi-label datasets are examined for one-pixel attacks, and further experiments determine how the number of altered pixels affects the classification performance and robustness of diverse DNN models. The experimental results show that the medical images rarely survived the pixel attacks, raising concerns about the accuracy of medical image classification and underscoring the importance of a model’s ability to resist such attacks in computer-aided diagnosis.

Bibliographic Details
Main Authors: Tsai, Min-Jen; Lin, Ping-Yi; Lee, Ming-En
Format: Online Article (Text)
Language: English
Published: MDPI, 23 August 2023
Journal: Cancers (Basel)
Collection: PubMed (National Center for Biotechnology Information)
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10487122/
https://www.ncbi.nlm.nih.gov/pubmed/37686504
http://dx.doi.org/10.3390/cancers15174228
License: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This is an open access article distributed under the terms of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).