GFNet: Automatic segmentation of COVID-19 lung infection regions using CT images based on boundary features
Main Authors:
Format: Online Article Text
Language: English
Published: Elsevier Ltd., 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9359771/ ; https://www.ncbi.nlm.nih.gov/pubmed/35966970 ; http://dx.doi.org/10.1016/j.patcog.2022.108963
Summary: In early 2020, the global spread of COVID-19 presented the world with a serious health crisis. Due to the large number of infected patients, automatic segmentation of lung infections using computed tomography (CT) images has great potential to enhance traditional medical strategies. However, the segmentation of infected regions in CT slices still faces many challenges. In particular, the core problem is the high variability of infection characteristics and the low contrast between infected and normal regions, which leads to fuzzy regions in lung CT segmentation. To address this problem, we have designed a novel global feature network (GFNet) for COVID-19 lung infections: with VGG16 as the backbone, we design an edge-guidance module (Eg) that fuses the features of each layer. First, features are extracted by a reverse attention module and combined with the Eg output. This series of steps enables each layer to fully extract the boundary details that previous models struggle to capture, thus resolving the fuzziness of infected regions. The multi-layer output features are fused into the final output to achieve automatic and accurate segmentation of infected areas. We compared GFNet with the traditional medical segmentation networks UNet and UNet++, the recent Inf-Net, and methods from the few-shot learning field. Experiments show that our model is superior to the above models in Dice, Sensitivity, Specificity, and other evaluation metrics, and that our segmentation results are visually clear and accurate, which proves the effectiveness of GFNet. In addition, we verify the generalization ability of GFNet on another "never seen" dataset, and the results prove that our model still generalizes better than the above models. Our code has been shared at https://github.com/zengzhenhuan/GFNet.
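The evaluation metrics named in the abstract (Dice, Sensitivity, Specificity) have standard definitions for binary segmentation masks. As a minimal sketch, not taken from the paper's code, they can be computed from the confusion-matrix counts of a predicted mask against a ground-truth mask (function name and signature are illustrative assumptions):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Standard binary-segmentation metrics (hypothetical helper, not from GFNet).

    pred, target: arrays of 0/1 where 1 marks an infected pixel.
    Returns (dice, sensitivity, specificity).
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)

    # Confusion-matrix counts over all pixels.
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()

    # Dice = 2*TP / (2*TP + FP + FN); degenerate empty-mask case -> 1.0
    dice = 2.0 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    # Sensitivity (recall on infected pixels) = TP / (TP + FN)
    sensitivity = tp / (tp + fn) if (tp + fn) else 1.0
    # Specificity (recall on normal pixels) = TN / (TN + FP)
    specificity = tn / (tn + fp) if (tn + fp) else 1.0
    return dice, sensitivity, specificity
```

For example, a prediction that marks one extra pixel beyond a single true pixel yields Dice 2/3, Sensitivity 1.0, and a Specificity penalized by the false positive.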