On the role of deep learning model complexity in adversarial robustness for medical images
BACKGROUND: Deep learning (DL) models for medical image classification are highly vulnerable to adversarial attacks. An adversary could modify the input data in imperceptible ways such that the model is tricked into shifting its prediction for, say, an image that actually exhibits a malignant tumor to a prediction that...
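The abstract alludes to imperceptible input perturbations that flip a classifier's prediction. This record does not state which attack the paper uses; the following is only a minimal sketch of a standard gradient-sign attack (FGSM), assuming a hypothetical PyTorch classifier `model`, an image tensor `x` scaled to [0, 1], and a true label `y`.

```python
# Minimal FGSM sketch; `model`, `x`, `y`, and `epsilon` are hypothetical
# placeholders, not the setup used in the paper described in this record.
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.01):
    """Perturb x by epsilon in the sign of the loss gradient: the result is
    visually near-identical to x, yet it may flip the model's prediction
    (e.g., from benign to malignant)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```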
| Main Authors: | Rodriguez, David; Nayak, Tapsya; Chen, Yidong; Krishnan, Ram; Huang, Yufei |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | BioMed Central, 2022 |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9208111/ https://www.ncbi.nlm.nih.gov/pubmed/35725429 http://dx.doi.org/10.1186/s12911-022-01891-w |
Similar Items
- Using Adversarial Images to Assess the Robustness of Deep Learning Models Trained on Diagnostic Images in Oncology
  by: Joel, Marina Z., et al.
  Published: (2022)
- Robust adversarial uncertainty quantification for deep learning fine-tuning
  by: Ahmed, Usman, et al.
  Published: (2023)
- Adversarial Robustness of Deep Reinforcement Learning Based Dynamic Recommender Systems
  by: Wang, Siyu, et al.
  Published: (2022)
- Digital Watermarking as an Adversarial Attack on Medical Image Analysis with Deep Learning
  by: Apostolidis, Kyriakos D., et al.
  Published: (2022)
- Between-Class Adversarial Training for Improving Adversarial Robustness of Image Classification
  by: Wang, Desheng, et al.
  Published: (2023)