
A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images

Recent studies indicate that detecting radiographic patterns on CT chest scans can yield high sensitivity and specificity for COVID-19 identification. In this paper, we scrutinize the effectiveness of deep learning models for semantic segmentation of pneumonia-infected areas in CT images...


Bibliographic Details
Main Authors: Voulodimos, Athanasios, Protopapadakis, Eftychios, Katsamenis, Iason, Doulamis, Anastasios, Doulamis, Nikolaos
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8004971/
https://www.ncbi.nlm.nih.gov/pubmed/33810066
http://dx.doi.org/10.3390/s21062215
_version_ 1783672026121109504
author Voulodimos, Athanasios
Protopapadakis, Eftychios
Katsamenis, Iason
Doulamis, Anastasios
Doulamis, Nikolaos
author_facet Voulodimos, Athanasios
Protopapadakis, Eftychios
Katsamenis, Iason
Doulamis, Anastasios
Doulamis, Nikolaos
author_sort Voulodimos, Athanasios
collection PubMed
description Recent studies indicate that detecting radiographic patterns on CT chest scans can yield high sensitivity and specificity for COVID-19 identification. In this paper, we scrutinize the effectiveness of deep learning models for semantic segmentation of pneumonia-infected areas in CT images for the detection of COVID-19. Traditional methods for CT scan segmentation exploit a supervised learning paradigm, so they (a) require large volumes of data for their training, and (b) assume fixed (static) network weights once the training procedure has been completed. Recently, to overcome these difficulties, few-shot learning (FSL) has been introduced as a general concept of network model training using a very small number of samples. In this paper, we explore the efficacy of few-shot learning in U-Net architectures, allowing for dynamic fine-tuning of the network weights as a few new samples are fed into the U-Net. Experimental results indicate an improvement in segmentation accuracy for identifying COVID-19-infected regions. In particular, using the 4-fold cross-validation results of the different classifiers, we observed an improvement of 5.388 ± 3.046% for all test data regarding the IoU metric and a similar increment of 5.394 ± 3.015% for the F1 score. Moreover, the statistical significance of the improvement obtained using our proposed few-shot U-Net architecture compared with the traditional U-Net model was confirmed by applying the Kruskal-Wallis test (p-value = 0.026).
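The description reports per-fold IoU and F1 gains under 4-fold cross-validation and a Kruskal-Wallis significance check. The following is a minimal, illustrative Python sketch (not the authors' code) of how such segmentation metrics and the significance test can be computed; the mask layout, variable names, and per-fold numbers below are placeholders, not values from the paper.

import numpy as np
from scipy.stats import kruskal

def iou_and_f1(pred, target, eps=1e-7):
    # pred/target: binary masks of equal shape (1 = infected region, 0 = background)
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    iou = tp / (tp + fp + fn + eps)          # intersection over union
    f1 = 2 * tp / (2 * tp + fp + fn + eps)   # F1 / Dice score
    return iou, f1

# Hypothetical per-fold IoU scores for a baseline U-Net and a few-shot variant
# (4-fold cross-validation); the numbers are placeholders for illustration only.
baseline_iou = [0.61, 0.58, 0.63, 0.60]
fewshot_iou = [0.66, 0.64, 0.68, 0.65]
stat, p_value = kruskal(baseline_iou, fewshot_iou)
print(f"Kruskal-Wallis H = {stat:.3f}, p = {p_value:.3f}")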
format Online
Article
Text
id pubmed-8004971
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8004971 2021-03-29 A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images Voulodimos, Athanasios Protopapadakis, Eftychios Katsamenis, Iason Doulamis, Anastasios Doulamis, Nikolaos Sensors (Basel) Article Recent studies indicate that detecting radiographic patterns on CT chest scans can yield high sensitivity and specificity for COVID-19 identification. In this paper, we scrutinize the effectiveness of deep learning models for semantic segmentation of pneumonia-infected areas in CT images for the detection of COVID-19. Traditional methods for CT scan segmentation exploit a supervised learning paradigm, so they (a) require large volumes of data for their training, and (b) assume fixed (static) network weights once the training procedure has been completed. Recently, to overcome these difficulties, few-shot learning (FSL) has been introduced as a general concept of network model training using a very small number of samples. In this paper, we explore the efficacy of few-shot learning in U-Net architectures, allowing for dynamic fine-tuning of the network weights as a few new samples are fed into the U-Net. Experimental results indicate an improvement in segmentation accuracy for identifying COVID-19-infected regions. In particular, using the 4-fold cross-validation results of the different classifiers, we observed an improvement of 5.388 ± 3.046% for all test data regarding the IoU metric and a similar increment of 5.394 ± 3.015% for the F1 score. Moreover, the statistical significance of the improvement obtained using our proposed few-shot U-Net architecture compared with the traditional U-Net model was confirmed by applying the Kruskal-Wallis test (p-value = 0.026). MDPI 2021-03-22 /pmc/articles/PMC8004971/ /pubmed/33810066 http://dx.doi.org/10.3390/s21062215 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Voulodimos, Athanasios
Protopapadakis, Eftychios
Katsamenis, Iason
Doulamis, Anastasios
Doulamis, Nikolaos
A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images
title A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images
title_full A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images
title_fullStr A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images
title_full_unstemmed A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images
title_short A Few-Shot U-Net Deep Learning Model for COVID-19 Infected Area Segmentation in CT Images
title_sort few-shot u-net deep learning model for covid-19 infected area segmentation in ct images
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8004971/
https://www.ncbi.nlm.nih.gov/pubmed/33810066
http://dx.doi.org/10.3390/s21062215
work_keys_str_mv AT voulodimosathanasios afewshotunetdeeplearningmodelforcovid19infectedareasegmentationinctimages
AT protopapadakiseftychios afewshotunetdeeplearningmodelforcovid19infectedareasegmentationinctimages
AT katsamenisiason afewshotunetdeeplearningmodelforcovid19infectedareasegmentationinctimages
AT doulamisanastasios afewshotunetdeeplearningmodelforcovid19infectedareasegmentationinctimages
AT doulamisnikolaos afewshotunetdeeplearningmodelforcovid19infectedareasegmentationinctimages
AT voulodimosathanasios fewshotunetdeeplearningmodelforcovid19infectedareasegmentationinctimages
AT protopapadakiseftychios fewshotunetdeeplearningmodelforcovid19infectedareasegmentationinctimages
AT katsamenisiason fewshotunetdeeplearningmodelforcovid19infectedareasegmentationinctimages
AT doulamisanastasios fewshotunetdeeplearningmodelforcovid19infectedareasegmentationinctimages
AT doulamisnikolaos fewshotunetdeeplearningmodelforcovid19infectedareasegmentationinctimages