A cognitive IoT-based framework for effective diagnosis of COVID-19 using multimodal data
COVID-19 emerged at the end of 2019 and has become a global pandemic. There are many methods for COVID-19 prediction that use a single modality; however, none of them predicts with 100% accuracy, as each individual exhibits varied symptoms of the disease. To decrease the rate of misdiagnosis, mult...
Main Authors: | Jayachitra, V.P., Nivetha, S, Nivetha, R, Harini, R |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Elsevier Ltd. 2021 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8260502/ https://www.ncbi.nlm.nih.gov/pubmed/34249142 http://dx.doi.org/10.1016/j.bspc.2021.102960 |
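The abstract describes a late-fusion design: five independently trained unimodal models (cough, speech, breathing, X-ray, CT) each produce a prediction, and a dynamic multimodal Random Forest classifier combines them, reportedly tolerating a missing modality at test time. The sketch below illustrates only that fusion idea; it assumes a scikit-learn Random Forest, hypothetical per-modality probability scores, and a simple sentinel value for absent modalities, none of which are specified in this record and none of which should be read as the authors' implementation.

```python
# Minimal late-fusion sketch (illustrative only, not the paper's pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical probability scores from five unimodal models
# (cough, speech, breathing, X-ray, CT); random placeholders, not real data.
n_samples = 200
unimodal_probs = rng.random((n_samples, 5))
labels = (unimodal_probs.mean(axis=1) > 0.5).astype(int)  # placeholder labels

# Mark roughly 10% of the scores as "modality unavailable" with a sentinel,
# echoing the claim that the fusion step copes with an absent input.
missing = rng.random((n_samples, 5)) < 0.1
features = np.where(missing, -1.0, unimodal_probs)

# Fusion classifier trained on the stacked unimodal scores.
fusion = RandomForestClassifier(n_estimators=100, random_state=0)
fusion.fit(features, labels)

# Inference: gather whichever unimodal scores are available, use the sentinel
# for the absent one, and let the fusion classifier make the final call.
query = np.array([[0.91, 0.87, -1.0, 0.95, 0.89]])  # breathing score missing
print(fusion.predict(query), fusion.predict_proba(query))
```

In the paper's actual system the per-modality scores would come from CovParaNet and CovTinyNet; here they are random placeholders purely to keep the example self-contained and runnable.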
_version_ | 1783718820026777600 |
---|---|
author | Jayachitra, V.P. Nivetha, S Nivetha, R Harini, R |
author_facet | Jayachitra, V.P. Nivetha, S Nivetha, R Harini, R |
author_sort | Jayachitra, V.P. |
collection | PubMed |
description | COVID-19 emerged at the end of 2019 and has become a global pandemic. There are many methods for COVID-19 prediction that use a single modality; however, none of them predicts with 100% accuracy, as each individual exhibits varied symptoms of the disease. To decrease the rate of misdiagnosis, multiple modalities can be used for prediction. In addition, a self-diagnosis system is needed to reduce the risk of virus spread at testing centres. Therefore, we propose a robust IoT and deep learning-based multimodal data classification method for the accurate prediction of COVID-19. Highly accurate models generally require deep architectures. In this work, we introduce two lightweight models, namely CovParaNet for audio (cough, speech, breathing) classification and CovTinyNet for image (X-ray, CT scan) classification. These two models were identified as the best unimodal models after comparative analysis with existing benchmark models. Finally, the outputs of the five independently trained unimodal models are integrated by a novel dynamic multimodal Random Forest classifier. The lightweight CovParaNet and CovTinyNet models attain maximum accuracies of 97.45% and 99.19%, respectively, even with a small dataset. The proposed dynamic multimodal fusion model predicts the final result with 100% accuracy, precision, and recall, and its online retraining mechanism enables it to operate even in a noisy environment. Furthermore, the computational complexity of all the unimodal models is greatly reduced, and the system functions with 100% reliability even when any one of the input modalities is absent during testing. |
format | Online Article Text |
id | pubmed-8260502 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Elsevier Ltd. |
record_format | MEDLINE/PubMed |
spelling | pubmed-82605022021-07-07 A cognitive IoT-based framework for effective diagnosis of COVID-19 using multimodal data Jayachitra, V.P. Nivetha, S Nivetha, R Harini, R Biomed Signal Process Control Article COVID-19 emerged at the end of 2019 and has become a global pandemic. There are many methods for COVID-19 prediction that use a single modality; however, none of them predicts with 100% accuracy, as each individual exhibits varied symptoms of the disease. To decrease the rate of misdiagnosis, multiple modalities can be used for prediction. In addition, a self-diagnosis system is needed to reduce the risk of virus spread at testing centres. Therefore, we propose a robust IoT and deep learning-based multimodal data classification method for the accurate prediction of COVID-19. Highly accurate models generally require deep architectures. In this work, we introduce two lightweight models, namely CovParaNet for audio (cough, speech, breathing) classification and CovTinyNet for image (X-ray, CT scan) classification. These two models were identified as the best unimodal models after comparative analysis with existing benchmark models. Finally, the outputs of the five independently trained unimodal models are integrated by a novel dynamic multimodal Random Forest classifier. The lightweight CovParaNet and CovTinyNet models attain maximum accuracies of 97.45% and 99.19%, respectively, even with a small dataset. The proposed dynamic multimodal fusion model predicts the final result with 100% accuracy, precision, and recall, and its online retraining mechanism enables it to operate even in a noisy environment. Furthermore, the computational complexity of all the unimodal models is greatly reduced, and the system functions with 100% reliability even when any one of the input modalities is absent during testing. Elsevier Ltd. 2021-09 2021-07-07 /pmc/articles/PMC8260502/ /pubmed/34249142 http://dx.doi.org/10.1016/j.bspc.2021.102960 Text en © 2021 Elsevier Ltd. All rights reserved. Since January 2020 Elsevier has created a COVID-19 resource centre with free information in English and Mandarin on the novel coronavirus COVID-19. The COVID-19 resource centre is hosted on Elsevier Connect, the company's public news and information website. Elsevier hereby grants permission to make all its COVID-19-related research that is available on the COVID-19 resource centre - including this research content - immediately available in PubMed Central and other publicly funded repositories, such as the WHO COVID database with rights for unrestricted research re-use and analyses in any form or by any means with acknowledgement of the original source. These permissions are granted for free by Elsevier for as long as the COVID-19 resource centre remains active. |
spellingShingle | Article Jayachitra, V.P. Nivetha, S Nivetha, R Harini, R A cognitive IoT-based framework for effective diagnosis of COVID-19 using multimodal data |
title | A cognitive IoT-based framework for effective diagnosis of COVID-19 using multimodal data |
title_full | A cognitive IoT-based framework for effective diagnosis of COVID-19 using multimodal data |
title_fullStr | A cognitive IoT-based framework for effective diagnosis of COVID-19 using multimodal data |
title_full_unstemmed | A cognitive IoT-based framework for effective diagnosis of COVID-19 using multimodal data |
title_short | A cognitive IoT-based framework for effective diagnosis of COVID-19 using multimodal data |
title_sort | cognitive iot-based framework for effective diagnosis of covid-19 using multimodal data |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8260502/ https://www.ncbi.nlm.nih.gov/pubmed/34249142 http://dx.doi.org/10.1016/j.bspc.2021.102960 |
work_keys_str_mv | AT jayachitravp acognitiveiotbasedframeworkforeffectivediagnosisofcovid19usingmultimodaldata AT nivethas acognitiveiotbasedframeworkforeffectivediagnosisofcovid19usingmultimodaldata AT nivethar acognitiveiotbasedframeworkforeffectivediagnosisofcovid19usingmultimodaldata AT harinir acognitiveiotbasedframeworkforeffectivediagnosisofcovid19usingmultimodaldata AT jayachitravp cognitiveiotbasedframeworkforeffectivediagnosisofcovid19usingmultimodaldata AT nivethas cognitiveiotbasedframeworkforeffectivediagnosisofcovid19usingmultimodaldata AT nivethar cognitiveiotbasedframeworkforeffectivediagnosisofcovid19usingmultimodaldata AT harinir cognitiveiotbasedframeworkforeffectivediagnosisofcovid19usingmultimodaldata |