Visual transformer and deep CNN prediction of high-risk COVID-19 infected patients using fusion of CT images and clinical data
BACKGROUND: Despite globally declining hospitalization rates and much lower risks of Covid-19 mortality, accurate diagnosis of the infection stage and prediction of outcomes remain of clinical interest. Current advanced technology can help automate the process and identify those who are at higher risk of developing severe illness.
Main Authors: Tehrani, Sara Saberi Moghadam; Zarvani, Maral; Amiri, Paria; Ghods, Zahra; Raoufi, Masoomeh; Safavi-Naini, Seyed Amir Ahmad; Soheili, Amirali; Gharib, Mohammad; Abbasi, Hamid
Format: Online Article Text
Language: English
Published: BioMed Central, 2023
Subjects: Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10656999/ https://www.ncbi.nlm.nih.gov/pubmed/37978393 http://dx.doi.org/10.1186/s12911-023-02344-8
_version_ | 1785148111429042176 |
author | Tehrani, Sara Saberi Moghadam Zarvani, Maral Amiri, Paria Ghods, Zahra Raoufi, Masoomeh Safavi-Naini, Seyed Amir Ahmad Soheili, Amirali Gharib, Mohammad Abbasi, Hamid |
author_facet | Tehrani, Sara Saberi Moghadam Zarvani, Maral Amiri, Paria Ghods, Zahra Raoufi, Masoomeh Safavi-Naini, Seyed Amir Ahmad Soheili, Amirali Gharib, Mohammad Abbasi, Hamid |
author_sort | Tehrani, Sara Saberi Moghadam |
collection | PubMed |
description | BACKGROUND: Despite globally declining hospitalization rates and much lower risks of Covid-19 mortality, accurate diagnosis of the infection stage and prediction of outcomes remain of clinical interest. Current advanced technology can help automate the process and identify those who are at higher risk of developing severe illness. This work explores and presents deep-learning-based schemes for predicting clinical outcomes in Covid-19-infected patients, using Visual Transformer and Convolutional Neural Networks (CNNs) fed with a 3D data fusion of CT scan images and patients’ clinical data. METHODS: We report on the efficiency of Video Swin Transformers and several CNN models fed with fusion datasets or CT scans only, versus a set of conventional classifiers fed with patients’ clinical data only. A relatively large clinical dataset from 380 Covid-19-diagnosed patients was used to train and test the models. RESULTS: The 3D Video Swin Transformers fed with fusion datasets of 64 sectional CT scans + 67 clinical labels outperformed all other techniques for predicting outcomes in Covid-19-infected patients (TPR = 0.95, FPR = 0.40, F0.5 score = 0.82, AUC = 0.77, Kappa = 0.6). CONCLUSIONS: We demonstrate that the proposed 3D data fusion approach, which concatenates CT scan images with patients’ clinical data, can remarkably improve the models’ performance in predicting Covid-19 infection outcomes. SIGNIFICANCE: The findings indicate that the severity of outcome can be predicted from patients’ CT images and clinical data collected at the time of hospital admission. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12911-023-02344-8. |
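The methods summarized above hinge on an input-level fusion of a 64-slice CT volume with 67 clinical variables per patient, which is then passed to a 3D Video Swin Transformer or 3D CNN. The sketch below illustrates one plausible way such a fused input could be built, assuming PyTorch; the tiling of clinical values into an extra slice, the tensor shapes, and the small 3D CNN stand-in are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code) of the 3D fusion idea described in the abstract:
# a 64-slice CT volume is concatenated with a patient's 67 clinical values, tiled into
# one extra "slice", and the fused volume is fed to a 3D classifier. The small 3D CNN
# here is only a placeholder for the Video Swin Transformer / 3D CNN backbones.
import torch
import torch.nn as nn
import torch.nn.functional as F


def fuse_ct_and_clinical(ct_volume: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
    """ct_volume: (B, 1, 64, H, W) CT stack; clinical: (B, 67) tabular features.
    Returns a (B, 1, 65, H, W) volume with the clinical data tiled into an extra slice."""
    b, _, _, h, w = ct_volume.shape
    # Zero-pad the 67 clinical values to fill one H x W plane (assumed tiling strategy).
    plane = F.pad(clinical, (0, h * w - clinical.shape[1]))[:, : h * w]
    plane = plane.view(b, 1, 1, h, w)
    return torch.cat([ct_volume, plane], dim=2)  # concatenate along the slice axis


class Small3DClassifier(nn.Module):
    """Placeholder 3D backbone standing in for the reported Video Swin Transformer / CNNs."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, fused_volume: torch.Tensor) -> torch.Tensor:
        x = self.features(fused_volume).flatten(1)
        return self.head(x)  # logits for low-risk vs. high-risk outcome


if __name__ == "__main__":
    ct = torch.randn(2, 1, 64, 64, 64)       # two patients, 64 CT slices each (demo size)
    clinical = torch.randn(2, 67)            # 67 clinical features per patient
    fused = fuse_ct_and_clinical(ct, clinical)
    print(Small3DClassifier()(fused).shape)  # torch.Size([2, 2])
```

In the study itself, the placeholder backbone would be replaced by the reported Video Swin Transformer or CNN architectures, trained and evaluated on the 380-patient dataset described in the abstract.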
format | Online Article Text |
id | pubmed-10656999 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-10656999, 2023-11-17. BMC Med Inform Decis Mak (Research). Published by BioMed Central, 2023-11-17. /pmc/articles/PMC10656999/ /pubmed/37978393 http://dx.doi.org/10.1186/s12911-023-02344-8. Text, en. © The Author(s) 2023; open access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Research Tehrani, Sara Saberi Moghadam Zarvani, Maral Amiri, Paria Ghods, Zahra Raoufi, Masoomeh Safavi-Naini, Seyed Amir Ahmad Soheili, Amirali Gharib, Mohammad Abbasi, Hamid Visual transformer and deep CNN prediction of high-risk COVID-19 infected patients using fusion of CT images and clinical data |
title | Visual transformer and deep CNN prediction of high-risk COVID-19 infected patients using fusion of CT images and clinical data |
title_full | Visual transformer and deep CNN prediction of high-risk COVID-19 infected patients using fusion of CT images and clinical data |
title_fullStr | Visual transformer and deep CNN prediction of high-risk COVID-19 infected patients using fusion of CT images and clinical data |
title_full_unstemmed | Visual transformer and deep CNN prediction of high-risk COVID-19 infected patients using fusion of CT images and clinical data |
title_short | Visual transformer and deep CNN prediction of high-risk COVID-19 infected patients using fusion of CT images and clinical data |
title_sort | visual transformer and deep cnn prediction of high-risk covid-19 infected patients using fusion of ct images and clinical data |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10656999/ https://www.ncbi.nlm.nih.gov/pubmed/37978393 http://dx.doi.org/10.1186/s12911-023-02344-8 |
work_keys_str_mv | AT tehranisarasaberimoghadam visualtransformeranddeepcnnpredictionofhighriskcovid19infectedpatientsusingfusionofctimagesandclinicaldata AT zarvanimaral visualtransformeranddeepcnnpredictionofhighriskcovid19infectedpatientsusingfusionofctimagesandclinicaldata AT amiriparia visualtransformeranddeepcnnpredictionofhighriskcovid19infectedpatientsusingfusionofctimagesandclinicaldata AT ghodszahra visualtransformeranddeepcnnpredictionofhighriskcovid19infectedpatientsusingfusionofctimagesandclinicaldata AT raoufimasoomeh visualtransformeranddeepcnnpredictionofhighriskcovid19infectedpatientsusingfusionofctimagesandclinicaldata AT safavinainiseyedamirahmad visualtransformeranddeepcnnpredictionofhighriskcovid19infectedpatientsusingfusionofctimagesandclinicaldata AT soheiliamirali visualtransformeranddeepcnnpredictionofhighriskcovid19infectedpatientsusingfusionofctimagesandclinicaldata AT gharibmohammad visualtransformeranddeepcnnpredictionofhighriskcovid19infectedpatientsusingfusionofctimagesandclinicaldata AT abbasihamid visualtransformeranddeepcnnpredictionofhighriskcovid19infectedpatientsusingfusionofctimagesandclinicaldata |