Deceptive Tricks in Artificial Intelligence: Adversarial Attacks in Ophthalmology
The artificial intelligence (AI) systems used for diagnosing ophthalmic diseases have significantly progressed in recent years. The diagnosis of difficult eye conditions, such as cataracts, diabetic retinopathy, age-related macular degeneration, glaucoma, and retinopathy of prematurity, has become significantly less complicated as a result of the development of AI algorithms, which are currently on par with ophthalmologists in terms of their effectiveness (full abstract below; an illustrative adversarial-attack sketch follows the access links).
Main Authors: | Zbrzezny, Agnieszka M., Grzybowski, Andrzej E. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | Review |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10179065/ https://www.ncbi.nlm.nih.gov/pubmed/37176706 http://dx.doi.org/10.3390/jcm12093266 |
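The abstract turns on the notion of an adversarial example: an input perturbed so slightly that a human reader sees no change, yet the model's prediction flips. The article itself contains no code, so the following is only a minimal illustrative sketch of the best-known attack of this kind, the Fast Gradient Sign Method (FGSM) of Goodfellow et al.; the toy PyTorch classifier, image shape, and epsilon value are assumptions made for the demonstration, not details from the paper.

```python
# Minimal FGSM sketch (illustrative only; not from the article).
# Assumes a PyTorch classifier taking images with pixel values in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed to push `model` toward error.

    image:   (1, C, H, W) tensor, pixel values in [0, 1]
    label:   (1,) tensor holding the true class index
    epsilon: L-infinity bound on the per-pixel perturbation
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the sign of the gradient, i.e., the direction that
    # increases the loss fastest under an L-infinity budget.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Usage with a toy stand-in model and a random "retinal image":
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 2))
model.eval()
x, y = torch.rand(1, 3, 64, 64), torch.tensor([0])
x_adv = fgsm_attack(model, x, y)
print(model(x).argmax(1).item(), model(x_adv).argmax(1).item())
```

The sign step makes FGSM a single-step attack; stronger iterative variants (e.g., PGD) repeat it under the same epsilon budget, which is one reason the abstract stresses decision models with provable guarantees.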
_version_ | 1785041010010619904 |
---|---|
author | Zbrzezny, Agnieszka M. Grzybowski, Andrzej E. |
author_facet | Zbrzezny, Agnieszka M. Grzybowski, Andrzej E. |
author_sort | Zbrzezny, Agnieszka M. |
collection | PubMed |
description | The artificial intelligence (AI) systems used for diagnosing ophthalmic diseases have significantly progressed in recent years. The diagnosis of difficult eye conditions, such as cataracts, diabetic retinopathy, age-related macular degeneration, glaucoma, and retinopathy of prematurity, has become significantly less complicated as a result of the development of AI algorithms, which are currently on par with ophthalmologists in terms of their effectiveness. However, in building AI systems for medical applications such as identifying eye diseases, addressing the challenges of safety and trustworthiness is paramount, including the emerging threat of adversarial attacks. Research has increasingly focused on understanding and mitigating these attacks, with numerous articles discussing the topic in recent years. As a starting point for our discussion, we used the paper by Ma et al., “Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems”. A literature review was performed for this study, which included a thorough search of open-access research papers using online sources (PubMed and Google). The reviewed research provides examples of attack strategies specific to medical images. Unfortunately, attack algorithms tailored to the various ophthalmic image types have yet to be developed; this remains an open task. As a result, it is necessary to build algorithms that validate the computations and explain the findings of artificial intelligence models. In this article, we focus on adversarial attacks, one of the best-known attack methods, which provide evidence (i.e., adversarial examples) of the lack of resilience of decision models that lack provable guarantees. Adversarial attacks can produce inaccurate findings in deep learning systems and can have catastrophic effects in the healthcare industry, such as healthcare financing fraud and misdiagnosis. |
format | Online Article Text |
id | pubmed-10179065 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10179065 2023-05-13 Deceptive Tricks in Artificial Intelligence: Adversarial Attacks in Ophthalmology Zbrzezny, Agnieszka M. Grzybowski, Andrzej E. J Clin Med Review The artificial intelligence (AI) systems used for diagnosing ophthalmic diseases have significantly progressed in recent years. The diagnosis of difficult eye conditions, such as cataracts, diabetic retinopathy, age-related macular degeneration, glaucoma, and retinopathy of prematurity, has become significantly less complicated as a result of the development of AI algorithms, which are currently on par with ophthalmologists in terms of their effectiveness. However, in building AI systems for medical applications such as identifying eye diseases, addressing the challenges of safety and trustworthiness is paramount, including the emerging threat of adversarial attacks. Research has increasingly focused on understanding and mitigating these attacks, with numerous articles discussing the topic in recent years. As a starting point for our discussion, we used the paper by Ma et al., “Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems”. A literature review was performed for this study, which included a thorough search of open-access research papers using online sources (PubMed and Google). The reviewed research provides examples of attack strategies specific to medical images. Unfortunately, attack algorithms tailored to the various ophthalmic image types have yet to be developed; this remains an open task. As a result, it is necessary to build algorithms that validate the computations and explain the findings of artificial intelligence models. In this article, we focus on adversarial attacks, one of the best-known attack methods, which provide evidence (i.e., adversarial examples) of the lack of resilience of decision models that lack provable guarantees. Adversarial attacks can produce inaccurate findings in deep learning systems and can have catastrophic effects in the healthcare industry, such as healthcare financing fraud and misdiagnosis. MDPI 2023-05-04 /pmc/articles/PMC10179065/ /pubmed/37176706 http://dx.doi.org/10.3390/jcm12093266 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Review Zbrzezny, Agnieszka M. Grzybowski, Andrzej E. Deceptive Tricks in Artificial Intelligence: Adversarial Attacks in Ophthalmology |
title | Deceptive Tricks in Artificial Intelligence: Adversarial Attacks in Ophthalmology |
title_full | Deceptive Tricks in Artificial Intelligence: Adversarial Attacks in Ophthalmology |
title_fullStr | Deceptive Tricks in Artificial Intelligence: Adversarial Attacks in Ophthalmology |
title_full_unstemmed | Deceptive Tricks in Artificial Intelligence: Adversarial Attacks in Ophthalmology |
title_short | Deceptive Tricks in Artificial Intelligence: Adversarial Attacks in Ophthalmology |
title_sort | deceptive tricks in artificial intelligence: adversarial attacks in ophthalmology |
topic | Review |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10179065/ https://www.ncbi.nlm.nih.gov/pubmed/37176706 http://dx.doi.org/10.3390/jcm12093266 |
work_keys_str_mv | AT zbrzeznyagnieszkam deceptivetricksinartificialintelligenceadversarialattacksinophthalmology AT grzybowskiandrzeje deceptivetricksinartificialintelligenceadversarialattacksinophthalmology |