
The Threat of Adversarial Attack on a COVID-19 CT Image-Based Deep Learning System


Bibliographic Details
Main Authors: Li, Yang; Liu, Shaoying
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9952300/
https://www.ncbi.nlm.nih.gov/pubmed/36829688
http://dx.doi.org/10.3390/bioengineering10020194
Collection: PubMed
Description: The coronavirus disease 2019 (COVID-19) spread rapidly around the world, resulting in a global pandemic. Applying artificial intelligence to COVID-19 research can produce very exciting results. However, most research has focused on applying AI techniques to the study of COVID-19 while ignoring the security and reliability of AI systems. In this paper, we explore adversarial attacks on a deep learning system based on COVID-19 CT images, with the aim of helping to address this problem. First, we built a deep learning system that could distinguish COVID-19 CT images from non-COVID-19 CT images with an average accuracy of 76.27%. Second, we attacked the pretrained model with the fast gradient sign method (FGSM), an adversarial attack algorithm, causing the COVID-19 deep learning system to misclassify the CT images; the classification accuracy on non-COVID-19 CT images dropped from 80% to 0%. Finally, in response to this attack, we proposed how a more secure and reliable deep learning model based on COVID-19 medical images could be built. This research studies the security of a COVID-19 CT image-based deep learning system, and we hope to draw more researchers' attention to the security and reliability of medical deep learning systems.
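The FGSM attack described above perturbs an input in the direction of the sign of the loss gradient, scaled by a small epsilon. The sketch below illustrates this on a toy logistic-regression classifier with random weights; it is a minimal stand-in for the authors' CNN, and none of the weights or inputs come from the paper:

```python
import numpy as np

# Minimal FGSM (fast gradient sign method) sketch on a toy logistic
# "classifier". The model and data are illustrative assumptions, not the
# authors' CT-image network.
def fgsm_perturb(x, w, b, y, eps):
    """Return x + eps * sign(grad_x loss), clipped to the valid [0, 1] range."""
    z = x @ w + b                      # logit of the linear model
    p = 1.0 / (1.0 + np.exp(-z))      # predicted probability (sigmoid)
    grad_x = (p - y) * w              # gradient of cross-entropy loss w.r.t. x
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

rng = np.random.default_rng(0)
w = rng.normal(size=16)               # toy model weights
b = 0.0
x = rng.uniform(size=16)              # "image" flattened to features in [0, 1]
y = 1.0                               # true class label

x_adv = fgsm_perturb(x, w, b, y, eps=0.1)

# The perturbation stays small (bounded by eps in the L-infinity norm) ...
assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12
# ... yet it pushes the logit for the true class down, increasing the loss.
assert float(x_adv @ w + b) < float(x @ w + b)
```

Because each pixel moves by at most epsilon, the adversarial image is nearly indistinguishable from the original, which is what makes such attacks a realistic threat to deployed medical classifiers.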
Record ID: pubmed-9952300
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Bioengineering (Basel). Published by MDPI, 2023-02-02.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).