
Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks

Bibliographic Details
Main Authors: Hirano, Hokuto, Koga, Kazuki, Takemoto, Kazuhiro
Format: Online Article Text
Language: English
Published: Public Library of Science 2020
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7745979/
https://www.ncbi.nlm.nih.gov/pubmed/33332412
http://dx.doi.org/10.1371/journal.pone.0243963
_version_ 1783624699166588928
author Hirano, Hokuto
Koga, Kazuki
Takemoto, Kazuhiro
author_facet Hirano, Hokuto
Koga, Kazuki
Takemoto, Kazuhiro
author_sort Hirano, Hokuto
collection PubMed
description Owing to the epidemic of the novel coronavirus disease 2019 (COVID-19), chest X-ray and computed tomography imaging are being used to screen COVID-19 patients effectively. Computer-aided systems based on deep neural networks (DNNs) have been developed and released as open source to rapidly and accurately detect COVID-19 cases, because the need for expert radiologists, who are limited in number, forms a bottleneck for screening. However, thus far, the vulnerability of such DNN-based systems has been poorly evaluated, even though realistic and high-risk attacks using a universal adversarial perturbation (UAP), a single, input-agnostic perturbation that can induce DNN failure on most input images of a classification task, are available. We therefore focus on representative DNN models for detecting COVID-19 cases from chest X-ray images and evaluate their vulnerability to UAPs. We consider non-targeted UAPs, which cause a task failure by making the DNN assign an input an incorrect label, and targeted UAPs, which make the DNN classify an input into a specific class. The results demonstrate that the models are vulnerable to both non-targeted and targeted UAPs, even when the UAPs are small: a UAP whose norm is only 2% of the average image norm in the dataset achieves success rates above 85% for non-targeted attacks and above 90% for targeted attacks. Under the non-targeted UAPs, the DNN models judge most chest X-ray images to be COVID-19 cases, while the targeted UAPs cause the models to classify most chest X-ray images into a specified target class. These results indicate that careful consideration is required in practical applications of DNNs to COVID-19 diagnosis and, in particular, emphasize the need for strategies to address security concerns. As one such strategy, we show that iterative fine-tuning of the DNN models using UAPs improves their robustness against UAPs.
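The abstract describes applying a single UAP whose norm is about 2% of the average image norm and measuring non-targeted and targeted attack success rates. The following is a minimal, hedged sketch of that evaluation step in PyTorch; it is not the authors' code, and the model, the image/label tensors, and the helper name `scale_uap_to_ratio` are assumptions for illustration.

```python
# Hedged sketch (not the authors' code): apply a universal adversarial
# perturbation (UAP) to chest X-ray images and measure attack success rates.
# Assumes images are float tensors scaled to [0, 1] and `model` is any
# trained PyTorch classifier; `scale_uap_to_ratio` is an illustrative name.
import torch


def scale_uap_to_ratio(uap: torch.Tensor, images: torch.Tensor,
                       ratio: float = 0.02) -> torch.Tensor:
    """Rescale a UAP so its L2 norm is `ratio` (e.g. 2%) of the mean image L2 norm."""
    mean_image_norm = images.flatten(start_dim=1).norm(dim=1).mean()
    return uap * (ratio * mean_image_norm / uap.norm())


@torch.no_grad()
def attack_success_rate(model, images, labels, uap, target_class=None):
    """Non-targeted: fraction of originally correct images whose label flips.
    Targeted: fraction of images pushed into `target_class`."""
    model.eval()
    clean_pred = model(images).argmax(dim=1)
    adv_pred = model((images + uap).clamp(0.0, 1.0)).argmax(dim=1)
    if target_class is None:
        correct = clean_pred == labels
        flipped = (adv_pred != labels) & correct
        return flipped.float().sum() / correct.sum().clamp(min=1)
    return (adv_pred == target_class).float().mean()
```

Here the non-targeted rate is counted only over images the model originally classified correctly, which is one common definition of attack success; other conventions count over the whole dataset.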
format Online
Article
Text
id pubmed-7745979
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-7745979 2020-12-31 Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks Hirano, Hokuto Koga, Kazuki Takemoto, Kazuhiro PLoS One Research Article Owing to the epidemic of the novel coronavirus disease 2019 (COVID-19), chest X-ray and computed tomography imaging are being used to screen COVID-19 patients effectively. Computer-aided systems based on deep neural networks (DNNs) have been developed and released as open source to rapidly and accurately detect COVID-19 cases, because the need for expert radiologists, who are limited in number, forms a bottleneck for screening. However, thus far, the vulnerability of such DNN-based systems has been poorly evaluated, even though realistic and high-risk attacks using a universal adversarial perturbation (UAP), a single, input-agnostic perturbation that can induce DNN failure on most input images of a classification task, are available. We therefore focus on representative DNN models for detecting COVID-19 cases from chest X-ray images and evaluate their vulnerability to UAPs. We consider non-targeted UAPs, which cause a task failure by making the DNN assign an input an incorrect label, and targeted UAPs, which make the DNN classify an input into a specific class. The results demonstrate that the models are vulnerable to both non-targeted and targeted UAPs, even when the UAPs are small: a UAP whose norm is only 2% of the average image norm in the dataset achieves success rates above 85% for non-targeted attacks and above 90% for targeted attacks. Under the non-targeted UAPs, the DNN models judge most chest X-ray images to be COVID-19 cases, while the targeted UAPs cause the models to classify most chest X-ray images into a specified target class. These results indicate that careful consideration is required in practical applications of DNNs to COVID-19 diagnosis and, in particular, emphasize the need for strategies to address security concerns. As one such strategy, we show that iterative fine-tuning of the DNN models using UAPs improves their robustness against UAPs. Public Library of Science 2020-12-17 /pmc/articles/PMC7745979/ /pubmed/33332412 http://dx.doi.org/10.1371/journal.pone.0243963 Text en © 2020 Hirano et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
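The record also mentions the defense studied in the paper: iterative fine-tuning of the DNN models using UAPs. The sketch below is an assumption rather than the authors' implementation; `generate_uap(model, loader)` stands in for any UAP-generation routine, and the loop simply alternates between crafting a UAP against the current model and fine-tuning on UAP-perturbed images.

```python
# Hedged sketch of iterative fine-tuning with UAPs (not the authors' code).
# `generate_uap` is a caller-supplied, hypothetical UAP generation routine.
import torch
from torch.utils.data import DataLoader


def finetune_with_uaps(model, loader: DataLoader, generate_uap,
                       rounds: int = 5, lr: float = 1e-4, device: str = "cpu"):
    """Alternate between crafting a UAP against the current model and
    fine-tuning the model on images perturbed by that UAP."""
    model.to(device)
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(rounds):
        # Attack the current model, then train it to resist that attack.
        uap = generate_uap(model, loader).to(device)
        model.train()
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model((images + uap).clamp(0.0, 1.0)), labels)
            loss.backward()
            optimizer.step()
    return model
```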
spellingShingle Research Article
Hirano, Hokuto
Koga, Kazuki
Takemoto, Kazuhiro
Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks
title Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks
title_full Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks
title_fullStr Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks
title_full_unstemmed Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks
title_short Vulnerability of deep neural networks for detecting COVID-19 cases from chest X-ray images to universal adversarial attacks
title_sort vulnerability of deep neural networks for detecting covid-19 cases from chest x-ray images to universal adversarial attacks
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7745979/
https://www.ncbi.nlm.nih.gov/pubmed/33332412
http://dx.doi.org/10.1371/journal.pone.0243963
work_keys_str_mv AT hiranohokuto vulnerabilityofdeepneuralnetworksfordetectingcovid19casesfromchestxrayimagestouniversaladversarialattacks
AT kogakazuki vulnerabilityofdeepneuralnetworksfordetectingcovid19casesfromchestxrayimagestouniversaladversarialattacks
AT takemotokazuhiro vulnerabilityofdeepneuralnetworksfordetectingcovid19casesfromchestxrayimagestouniversaladversarialattacks