Adversarial Examples—Security Threats to COVID-19 Deep Learning Systems in Medical IoT Devices
Medical IoT devices are rapidly becoming part of management ecosystems for pandemics such as COVID-19. Existing research shows that deep learning (DL) algorithms have been successfully used by researchers to identify COVID-19 phenomena from raw data obtained from medical IoT devices. Some examples of IoT technology are radiological media, such as CT scanning and X-ray images, body temperature measurement using thermal cameras, safe social distancing identification using live face detection, and face mask detection from camera images. However, researchers have identified several security vulnerabilities of DL algorithms to adversarial perturbations. In this article, we have tested a number of COVID-19 diagnostic methods that rely on DL algorithms with relevant adversarial examples (AEs). Our test results show that DL models that do not consider defensive models against adversarial perturbations remain vulnerable to adversarial attacks. Finally, we present in detail the AE generation process, implementation of the attack model, and the perturbations of the existing DL-based COVID-19 diagnostic applications. We hope that this work will raise awareness of adversarial attacks and encourage others to safeguard DL models from attacks on healthcare systems.
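The abstract refers to an AE generation process and an attack model against DL-based COVID-19 classifiers without reproducing them here. The snippet below is a minimal sketch of one common gradient-based perturbation, the fast gradient sign method (FGSM), and is not code from the article; the `model`, `x`, and `y` placeholders are hypothetical assumptions for illustration.

```python
# Minimal FGSM sketch (illustrative only; not the attack code from the article).
# Assumes a pretrained PyTorch classifier `model`, a preprocessed image batch `x`
# scaled to [0, 1] (e.g., a chest X-ray tensor of shape [1, 3, 224, 224]),
# and integer class labels `y`.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.01):
    """Return an adversarial copy of `x`, perturbed by epsilon * sign(gradient)."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # loss w.r.t. the true labels
    loss.backward()
    # Step each pixel in the direction that increases the loss, then clamp
    # back to the valid image range so the perturbation stays small.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return torch.clamp(x_adv, 0.0, 1.0).detach()

# Example usage with a dummy classifier and random input (hypothetical values):
# model = torchvision.models.resnet18(num_classes=2)
# x, y = torch.rand(1, 3, 224, 224), torch.tensor([1])
# x_adv = fgsm_attack(model, x, y, epsilon=0.02)
```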
Format: | Online Article Text |
---|---|
Language: | English |
Published: | IEEE 2020 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8864959/ https://www.ncbi.nlm.nih.gov/pubmed/36811011 http://dx.doi.org/10.1109/JIOT.2020.3013710 |
_version_ | 1784655557839290368 |
---|---|
collection | PubMed |
description | Medical IoT devices are rapidly becoming part of management ecosystems for pandemics such as COVID-19. Existing research shows that deep learning (DL) algorithms have been successfully used by researchers to identify COVID-19 phenomena from raw data obtained from medical IoT devices. Some examples of IoT technology are radiological media, such as CT scanning and X-ray images, body temperature measurement using thermal cameras, safe social distancing identification using live face detection, and face mask detection from camera images. However, researchers have identified several security vulnerabilities of DL algorithms to adversarial perturbations. In this article, we have tested a number of COVID-19 diagnostic methods that rely on DL algorithms with relevant adversarial examples (AEs). Our test results show that DL models that do not consider defensive models against adversarial perturbations remain vulnerable to adversarial attacks. Finally, we present in detail the AE generation process, implementation of the attack model, and the perturbations of the existing DL-based COVID-19 diagnostic applications. We hope that this work will raise awareness of adversarial attacks and encourage others to safeguard DL models from attacks on healthcare systems. |
format | Online Article Text |
id | pubmed-8864959 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | IEEE |
record_format | MEDLINE/PubMed |
spelling | pubmed-88649592023-02-17 Adversarial Examples—Security Threats to COVID-19 Deep Learning Systems in Medical IoT Devices IEEE Internet Things J Article Medical IoT devices are rapidly becoming part of management ecosystems for pandemics such as COVID-19. Existing research shows that deep learning (DL) algorithms have been successfully used by researchers to identify COVID-19 phenomena from raw data obtained from medical IoT devices. Some examples of IoT technology are radiological media, such as CT scanning and X-ray images, body temperature measurement using thermal cameras, safe social distancing identification using live face detection, and face mask detection from camera images. However, researchers have identified several security vulnerabilities of DL algorithms to adversarial perturbations. In this article, we have tested a number of COVID-19 diagnostic methods that rely on DL algorithms with relevant adversarial examples (AEs). Our test results show that DL models that do not consider defensive models against adversarial perturbations remain vulnerable to adversarial attacks. Finally, we present in detail the AE generation process, implementation of the attack model, and the perturbations of the existing DL-based COVID-19 diagnostic applications. We hope that this work will raise awareness of adversarial attacks and encourage others to safeguard DL models from attacks on healthcare systems. IEEE 2020-08-03 /pmc/articles/PMC8864959/ /pubmed/36811011 http://dx.doi.org/10.1109/JIOT.2020.3013710 Text en This article is free to access and download, along with rights for full text and data mining, re-use and analysis. |
spellingShingle | Article Adversarial Examples—Security Threats to COVID-19 Deep Learning Systems in Medical IoT Devices |
title | Adversarial Examples—Security Threats to COVID-19 Deep Learning Systems in Medical IoT Devices |
title_full | Adversarial Examples—Security Threats to COVID-19 Deep Learning Systems in Medical IoT Devices |
title_fullStr | Adversarial Examples—Security Threats to COVID-19 Deep Learning Systems in Medical IoT Devices |
title_full_unstemmed | Adversarial Examples—Security Threats to COVID-19 Deep Learning Systems in Medical IoT Devices |
title_short | Adversarial Examples—Security Threats to COVID-19 Deep Learning Systems in Medical IoT Devices |
title_sort | adversarial examples—security threats to covid-19 deep learning systems in medical iot devices |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8864959/ https://www.ncbi.nlm.nih.gov/pubmed/36811011 http://dx.doi.org/10.1109/JIOT.2020.3013710 |
work_keys_str_mv | AT adversarialexamplessecuritythreatstocovid19deeplearningsystemsinmedicaliotdevices AT adversarialexamplessecuritythreatstocovid19deeplearningsystemsinmedicaliotdevices AT adversarialexamplessecuritythreatstocovid19deeplearningsystemsinmedicaliotdevices AT adversarialexamplessecuritythreatstocovid19deeplearningsystemsinmedicaliotdevices |