
A lightweight CNN-based network on COVID-19 detection using X-ray and CT images

Bibliographic Details
Main Authors: Huang, Mei-Ling; Liao, Yu-Chieh
Format: Online Article Text
Language: English
Published: Published by Elsevier Ltd., 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9090861/
https://www.ncbi.nlm.nih.gov/pubmed/35576824
http://dx.doi.org/10.1016/j.compbiomed.2022.105604
Description
Summary: BACKGROUND AND OBJECTIVES: Traditional detection of COVID-19 relies mainly on doctors or trained researchers interpreting computed tomography (CT) or chest X-ray images to decide whether a case is COVID-19, a process that is prone to identification errors. In this study, convolutional neural network technology is expected to identify COVID-19 efficiently and accurately. METHODS: This study uses and fine-tunes seven convolutional neural networks, InceptionV3, ResNet50V2, Xception, DenseNet121, MobileNetV2, EfficientNet-B0, and EfficientNetV2, for COVID-19 detection. In addition, we propose a lightweight convolutional neural network, LightEfficientNetV2, trained on a small number of chest X-ray and CT images. Five-fold cross-validation was used to evaluate the performance of each model. To confirm the performance of the proposed model, LightEfficientNetV2 was evaluated on three different datasets (NIH Chest X-rays, SARS-CoV-2, and COVID-CT). RESULTS: On the chest X-ray dataset, the highest accuracy before fine-tuning was 96.50% (InceptionV3), and the highest accuracy after fine-tuning was 97.73% (EfficientNetV2). The proposed LightEfficientNetV2 model achieved an accuracy of 98.33% on chest X-ray images. On CT images, the best transfer learning model before fine-tuning was MobileNetV2, with an accuracy of 94.46%; the best after fine-tuning was Xception, with an accuracy of 96.78%. The proposed LightEfficientNetV2 model achieved an accuracy of 97.48% on CT images. CONCLUSIONS: Compared with the state of the art, the proposed LightEfficientNetV2 demonstrates promising performance on chest X-ray images, CT images, and the three different datasets.
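
The record contains no code, but the methods described (fine-tuning pretrained CNN backbones and scoring them with five-fold cross-validation) follow a standard transfer learning workflow. The sketch below illustrates that workflow under assumed conditions using TensorFlow/Keras with an EfficientNetV2-B0 backbone; the image size, hyperparameters, placeholder data, and two-class setup are illustrative assumptions, not the authors' implementation of LightEfficientNetV2.

```python
# Minimal sketch (not the authors' code): fine-tuning a pretrained backbone
# on chest images with five-fold cross-validation. All settings are assumed.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

NUM_CLASSES = 2          # e.g. COVID-19 vs. normal (assumed)
IMG_SIZE = (224, 224)

def build_model():
    # ImageNet-pretrained backbone with a small classification head;
    # all backbone layers are left trainable for fine-tuning.
    base = tf.keras.applications.EfficientNetV2B0(
        include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
    base.trainable = True
    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = base(inputs, training=True)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(0.3)(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Placeholder data standing in for preprocessed X-ray/CT images and labels.
X = np.random.rand(20, *IMG_SIZE, 3).astype("float32")
y = np.array([0, 1] * 10)

# Five-fold cross-validation: train a fresh model per fold, average accuracy.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
fold_acc = []
for train_idx, val_idx in skf.split(X, y):
    model = build_model()
    model.fit(X[train_idx], y[train_idx], epochs=1, batch_size=4, verbose=0)
    _, acc = model.evaluate(X[val_idx], y[val_idx], verbose=0)
    fold_acc.append(acc)

print(f"Mean 5-fold accuracy: {np.mean(fold_acc):.4f}")
```

The same loop would be repeated for each backbone listed in the abstract (InceptionV3, ResNet50V2, Xception, DenseNet121, MobileNetV2, EfficientNet-B0, EfficientNetV2) by swapping the `tf.keras.applications` constructor; the paper's proposed LightEfficientNetV2 architecture itself is not reproduced here.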