
Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations

Deep learning algorithms have shown excellent performance in the field of medical image recognition, and practical applications have been made in several medical domains. Little is known about the feasibility and impact of undetectable adversarial attacks, which can disrupt an algorithm by modifying a single pixel of the image to be interpreted.

Full description

Bibliographic Details
Main Authors: Allyn, Jérôme, Allou, Nicolas, Vidal, Charles, Renou, Amélie, Ferdynus, Cyril
Format: Online Article Text
Language: English
Published: Lippincott Williams & Wilkins 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7738012/
https://www.ncbi.nlm.nih.gov/pubmed/33327315
http://dx.doi.org/10.1097/MD.0000000000023568
author Allyn, Jérôme
Allou, Nicolas
Vidal, Charles
Renou, Amélie
Ferdynus, Cyril
author_facet Allyn, Jérôme
Allou, Nicolas
Vidal, Charles
Renou, Amélie
Ferdynus, Cyril
author_sort Allyn, Jérôme
collection PubMed
description Deep learning algorithms have shown excellent performance in the field of medical image recognition, and practical applications have been made in several medical domains. Little is known about the feasibility and impact of undetectable adversarial attacks, which can disrupt an algorithm by modifying a single pixel of the image to be interpreted. The aim of this study was to test the feasibility and impact of an adversarial attack on the accuracy of a deep learning-based dermatoscopic image recognition system. First, the pre-trained convolutional neural network DenseNet-201 was trained to classify images from the training set into 7 categories. Second, an adversarial neural network was trained to generate undetectable perturbations on images from the test set, so that all perturbed images would be classified as melanocytic nevi. The perturbed images were then classified using the model generated in the first step. The study used the HAM-10000 dataset, an open-source image database containing 10,015 dermatoscopic images, which was split into a training set and a test set. The accuracy of the generated classification model was evaluated using images from the test set, and its accuracy on unperturbed and perturbed images was compared. The ability of 2 observers to detect image perturbations was evaluated, and the interobserver agreement was calculated. The overall accuracy of the classification model dropped from 84% (95% confidence interval (CI): 82–86) for unperturbed images to 67% (95% CI: 65–69) for perturbed images (McNemar test, P < .0001). The fooling ratio reached 100% for all categories of skin lesions. The sensitivity and specificity of the combined observers, calculated on a random sample of 50 images, were 58.3% (95% CI: 45.9–70.8) and 42.5% (95% CI: 27.2–57.8), respectively. The kappa agreement coefficient between the 2 observers was negative, at −0.22 (95% CI: −0.49 to −0.04). Adversarial attacks on medical image databases can distort interpretation by image recognition algorithms, are easy to perform, and are undetectable by humans. It seems essential to improve our understanding of deep learning-based image recognition systems and to upgrade their security before putting them to practical and daily use.
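
The two-step method summarized in the description above (fine-tune a pre-trained DenseNet-201 on the 7 lesion categories, then generate small targeted perturbations that force the melanocytic-nevus label) can be illustrated with a minimal sketch in Python using PyTorch/torchvision. The record does not specify the authors' adversarial network architecture, so a targeted iterative gradient-sign attack is used below purely as a stand-in; the class index, perturbation budget, and dummy input are illustrative assumptions rather than values from the study.

# Minimal sketch (not the authors' implementation): a pre-trained DenseNet-201
# adapted to 7 HAM-10000 classes, plus a targeted perturbation that pushes an
# image toward an assumed "melanocytic nevus" class index while keeping every
# pixel within a small epsilon budget. ImageNet normalization is omitted for brevity.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7      # HAM-10000 lesion categories
NEVUS_INDEX = 4      # assumed position of "melanocytic nevus" in the label encoding

def build_classifier() -> nn.Module:
    """Pre-trained DenseNet-201 with its head replaced for 7-class classification."""
    model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
    model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)
    return model

def targeted_perturbation(model: nn.Module, image: torch.Tensor,
                          target: int = NEVUS_INDEX,
                          epsilon: float = 2.0 / 255,
                          steps: int = 10) -> torch.Tensor:
    """Nudge `image` (shape [1, 3, H, W], values in [0, 1]) toward class `target`
    with an iterative gradient-sign attack bounded by +/- epsilon per pixel."""
    model.eval()
    loss_fn = nn.CrossEntropyLoss()
    target_label = torch.tensor([target])
    adv = image.clone().detach()
    alpha = epsilon / steps
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), target_label)
        grad, = torch.autograd.grad(loss, adv)
        # Step against the gradient to decrease the loss for the target class,
        # then project back into the epsilon-ball and the valid pixel range.
        adv = adv.detach() - alpha * grad.sign()
        adv = image + torch.clamp(adv - image, -epsilon, epsilon)
        adv = torch.clamp(adv, 0.0, 1.0).detach()
    return adv

if __name__ == "__main__":
    classifier = build_classifier()                    # head is untrained in this sketch
    dummy_image = torch.rand(1, 3, 224, 224)           # stand-in for a dermatoscopic image
    perturbed = targeted_perturbation(classifier, dummy_image)
    print(classifier(perturbed).argmax(dim=1))         # ideally NEVUS_INDEX after the attack

In the study, the perturbed test images were then fed back to the classification model and the drop in accuracy was assessed with a McNemar test; with a sketch like this, the analogous check is simply whether the argmax prediction flips to the target class after the perturbation.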
format Online
Article
Text
id pubmed-7738012
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Lippincott Williams & Wilkins
record_format MEDLINE/PubMed
spelling pubmed-7738012 2020-12-16 Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations Allyn, Jérôme Allou, Nicolas Vidal, Charles Renou, Amélie Ferdynus, Cyril Medicine (Baltimore) 4100 Deep learning algorithms have shown excellent performance in the field of medical image recognition, and practical applications have been made in several medical domains. Little is known about the feasibility and impact of undetectable adversarial attacks, which can disrupt an algorithm by modifying a single pixel of the image to be interpreted. The aim of this study was to test the feasibility and impact of an adversarial attack on the accuracy of a deep learning-based dermatoscopic image recognition system. First, the pre-trained convolutional neural network DenseNet-201 was trained to classify images from the training set into 7 categories. Second, an adversarial neural network was trained to generate undetectable perturbations on images from the test set, so that all perturbed images would be classified as melanocytic nevi. The perturbed images were then classified using the model generated in the first step. The study used the HAM-10000 dataset, an open-source image database containing 10,015 dermatoscopic images, which was split into a training set and a test set. The accuracy of the generated classification model was evaluated using images from the test set, and its accuracy on unperturbed and perturbed images was compared. The ability of 2 observers to detect image perturbations was evaluated, and the interobserver agreement was calculated. The overall accuracy of the classification model dropped from 84% (95% confidence interval (CI): 82–86) for unperturbed images to 67% (95% CI: 65–69) for perturbed images (McNemar test, P < .0001). The fooling ratio reached 100% for all categories of skin lesions. The sensitivity and specificity of the combined observers, calculated on a random sample of 50 images, were 58.3% (95% CI: 45.9–70.8) and 42.5% (95% CI: 27.2–57.8), respectively. The kappa agreement coefficient between the 2 observers was negative, at −0.22 (95% CI: −0.49 to −0.04). Adversarial attacks on medical image databases can distort interpretation by image recognition algorithms, are easy to perform, and are undetectable by humans. It seems essential to improve our understanding of deep learning-based image recognition systems and to upgrade their security before putting them to practical and daily use. Lippincott Williams & Wilkins 2020-12-11 /pmc/articles/PMC7738012/ /pubmed/33327315 http://dx.doi.org/10.1097/MD.0000000000023568 Text en Copyright © 2020 the Author(s). Published by Wolters Kluwer Health, Inc. http://creativecommons.org/licenses/by-nc/4.0 This is an open access article distributed under the terms of the Creative Commons Attribution-Non Commercial License 4.0 (CC BY-NC), where it is permissible to download, share, remix, transform, and build upon the work provided it is properly cited. The work cannot be used commercially without permission from the journal. http://creativecommons.org/licenses/by-nc/4.0
spellingShingle 4100
Allyn, Jérôme
Allou, Nicolas
Vidal, Charles
Renou, Amélie
Ferdynus, Cyril
Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations
title Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations
title_full Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations
title_fullStr Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations
title_full_unstemmed Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations
title_short Adversarial attack on deep learning-based dermatoscopic image recognition systems: Risk of misdiagnosis due to undetectable image perturbations
title_sort adversarial attack on deep learning-based dermatoscopic image recognition systems: risk of misdiagnosis due to undetectable image perturbations
topic 4100
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7738012/
https://www.ncbi.nlm.nih.gov/pubmed/33327315
http://dx.doi.org/10.1097/MD.0000000000023568
work_keys_str_mv AT allynjerome adversarialattackondeeplearningbaseddermatoscopicimagerecognitionsystemsriskofmisdiagnosisduetoundetectableimageperturbations
AT allounicolas adversarialattackondeeplearningbaseddermatoscopicimagerecognitionsystemsriskofmisdiagnosisduetoundetectableimageperturbations
AT vidalcharles adversarialattackondeeplearningbaseddermatoscopicimagerecognitionsystemsriskofmisdiagnosisduetoundetectableimageperturbations
AT renouamelie adversarialattackondeeplearningbaseddermatoscopicimagerecognitionsystemsriskofmisdiagnosisduetoundetectableimageperturbations
AT ferdynuscyril adversarialattackondeeplearningbaseddermatoscopicimagerecognitionsystemsriskofmisdiagnosisduetoundetectableimageperturbations