Fully feature fusion based neural network for COVID-19 lesion segmentation in CT images
Main Authors:
Format: Online Article Text
Language: English
Published: The Author(s). Published by Elsevier Ltd., 2023
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10083211/
https://www.ncbi.nlm.nih.gov/pubmed/37082352
http://dx.doi.org/10.1016/j.bspc.2023.104939
Summary: Coronavirus Disease 2019 (COVID-19) has spread around the world, seriously affecting people's health. As an auxiliary diagnostic method, computed tomography (CT) images contain rich semantic information. However, the automatic segmentation of COVID-19 lesions in CT images faces several challenges, including inconsistency in the size and shape of the lesions, the high variability of the lesions, and the low contrast in pixel values between a lesion and the normal tissue surrounding it. Therefore, this paper proposes a Fully Feature Fusion Based Neural Network for COVID-19 Lesion Segmentation in CT Images (F3-Net). F3-Net uses an encoder–decoder architecture. In F3-Net, the Multiple Scale Module (MSM) can sense features at different scales, and the Dense Path Module (DPM) is used to eliminate the semantic gap between features. The Attention Fusion Module (AFM) is the attention module, which can better fuse the multiple features. Furthermore, we propose an improved loss function [Formula: see text] that pays more attention to the lesions, based on prior knowledge of the distribution of COVID-19 lesions in the lungs. Finally, we verify the superior performance of F3-Net on a COVID-19 segmentation dataset; experiments demonstrate that the proposed model segments COVID-19 lesions in CT images more accurately than state-of-the-art benchmarks.
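The record above does not include the paper's code. As a rough illustration of the attention-fusion idea the abstract attributes to the AFM (learning weights to combine feature maps from multiple branches), here is a minimal NumPy sketch; the function name, shapes, and the simple per-channel softmax weighting are all assumptions for illustration, not the paper's actual AFM design:

```python
import numpy as np

def attention_fuse(a, b):
    """Fuse two feature maps with per-channel softmax attention.

    a, b: (C, H, W) arrays from two hypothetical branches. This is an
    illustrative sketch only, not the AFM implementation from the paper.
    """
    ga = a.mean(axis=(1, 2))               # (C,) channel descriptor of branch a
    gb = b.mean(axis=(1, 2))               # (C,) channel descriptor of branch b
    scores = np.stack([ga, gb])            # (2, C) branch scores per channel
    # Softmax over the branch axis: weights for each channel sum to 1.
    w = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)
    return w[0][:, None, None] * a + w[1][:, None, None] * b

# Fuse a constant "strong" branch with a zero branch: the stronger branch
# receives the larger attention weight, so it dominates the fused output.
fused = attention_fuse(np.ones((3, 4, 4)), np.zeros((3, 4, 4)))
```

The design choice sketched here (pooling each branch to a channel descriptor, then normalizing across branches) is a common pattern for attention-based feature fusion; the real AFM may differ in how scores are computed and learned.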