Adversarial attacks and adversarial robustness in computational pathology

Artificial Intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks. Hence, it is essential to quantify and mitigate this risk before widespread clinical use. Here, we show that convolutional neural networks (CNNs) are highly susceptible to white- and black-box adversarial attacks in clinically relevant weakly supervised classification tasks. Adversarially robust training and dual batch normalization (DBN) are possible mitigation strategies but require precise knowledge of the type of attack used at inference. We demonstrate that vision transformers (ViTs) perform as well as CNNs at baseline, but are orders of magnitude more robust to white- and black-box attacks. At a mechanistic level, we show that this is associated with a more robust latent representation of clinically relevant categories in ViTs compared to CNNs. Our results are in line with previous theoretical studies and provide empirical evidence that ViTs are robust learners in computational pathology. This implies that large-scale rollout of AI models in computational pathology should rely on ViTs rather than CNN-based classifiers to provide inherent protection against perturbation of the input data, especially adversarial attacks.
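
As an illustration of the white-box attacks the abstract refers to, below is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch. The abstract does not name the specific attack algorithms used, so FGSM is an assumption here, and `model`, `images`, and `labels` are hypothetical placeholders rather than anything taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.01):
    """One-step white-box FGSM attack (a standard attack; the paper's
    exact attack suite is not given in this record).

    Perturbs `images` in the direction of the loss gradient so that a
    classifier (e.g., a pathology tile classifier) is more likely to
    mispredict. All arguments are hypothetical placeholders.
    """
    model.eval()  # freeze batch-norm/dropout statistics during the attack
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step by epsilon in the sign of the gradient, then clamp to the
    # valid image range so the perturbed tile is still a legal input.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

Comparing a model's accuracy on `images` versus `fgsm_attack(model, images, labels)` is one simple way to quantify the kind of susceptibility the abstract reports for CNNs.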


Bibliographic Details
Main Authors: Ghaffari Laleh, Narmin, Truhn, Daniel, Veldhuizen, Gregory Patrick, Han, Tianyu, van Treeck, Marko, Buelow, Roman D., Langer, Rupert, Dislich, Bastian, Boor, Peter, Schulz, Volkmar, Kather, Jakob Nikolas
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9522657/
https://www.ncbi.nlm.nih.gov/pubmed/36175413
http://dx.doi.org/10.1038/s41467-022-33266-0
author Ghaffari Laleh, Narmin
Truhn, Daniel
Veldhuizen, Gregory Patrick
Han, Tianyu
van Treeck, Marko
Buelow, Roman D.
Langer, Rupert
Dislich, Bastian
Boor, Peter
Schulz, Volkmar
Kather, Jakob Nikolas
collection PubMed
description Artificial Intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks. Hence, it is essential to quantify and mitigate this risk before widespread clinical use. Here, we show that convolutional neural networks (CNNs) are highly susceptible to white- and black-box adversarial attacks in clinically relevant weakly supervised classification tasks. Adversarially robust training and dual batch normalization (DBN) are possible mitigation strategies but require precise knowledge of the type of attack used at inference. We demonstrate that vision transformers (ViTs) perform as well as CNNs at baseline, but are orders of magnitude more robust to white- and black-box attacks. At a mechanistic level, we show that this is associated with a more robust latent representation of clinically relevant categories in ViTs compared to CNNs. Our results are in line with previous theoretical studies and provide empirical evidence that ViTs are robust learners in computational pathology. This implies that large-scale rollout of AI models in computational pathology should rely on ViTs rather than CNN-based classifiers to provide inherent protection against perturbation of the input data, especially adversarial attacks.
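
The description names dual batch normalization (DBN) as a mitigation strategy. A generic sketch of the dual-BN idea follows: keep two sets of batch-norm statistics and route clean and adversarial batches through the matching set. This is an illustrative assumption; the record contains no code, and the paper's exact DBN implementation may differ.

```python
import torch.nn as nn

class DualBatchNorm2d(nn.Module):
    """Generic dual batch normalization: one BatchNorm for clean inputs,
    one for adversarial inputs, so neither distribution corrupts the
    other's running statistics. A sketch, not the paper's implementation.
    """

    def __init__(self, num_features):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)
        self.bn_adv = nn.BatchNorm2d(num_features)
        self.adversarial = False  # toggled by the training loop per batch

    def forward(self, x):
        # Route the batch through the statistics matching its distribution.
        return self.bn_adv(x) if self.adversarial else self.bn_clean(x)
```

In adversarially robust training, the loop would set `adversarial = True` before forwarding attacked batches and `False` for clean ones, which is why such schemes require knowing at inference time which kind of input to expect.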
format Online
Article
Text
id pubmed-9522657
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-9522657 2022-10-01 Nat Commun Article. Nature Publishing Group UK 2022-09-29 /pmc/articles/PMC9522657/ /pubmed/36175413 http://dx.doi.org/10.1038/s41467-022-33266-0 Text en © The Author(s) 2022. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third-party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/.
title Adversarial attacks and adversarial robustness in computational pathology
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9522657/
https://www.ncbi.nlm.nih.gov/pubmed/36175413
http://dx.doi.org/10.1038/s41467-022-33266-0