MMViT-Seg: A lightweight transformer and CNN fusion network for COVID-19 segmentation
Main authors: Yang, Yuan; Zhang, Lin; Ren, Lei; Wang, Xiaohan
Format: Online Article Text
Language: English
Published: Elsevier B.V., 2023
Subjects: Article
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9833855/ https://www.ncbi.nlm.nih.gov/pubmed/36706618 http://dx.doi.org/10.1016/j.cmpb.2023.107348
_version_ | 1784868329651961856 |
author | Yang, Yuan; Zhang, Lin; Ren, Lei; Wang, Xiaohan |
author_facet | Yang, Yuan; Zhang, Lin; Ren, Lei; Wang, Xiaohan |
author_sort | Yang, Yuan |
collection | PubMed |
description | Background and objective: COVID-19 is a serious threat to human health. Traditional convolutional neural networks (CNNs) can perform medical image segmentation, while transformers can also handle machine vision tasks because they capture long-range relationships better than CNNs. Combining CNNs and transformers for semantic segmentation has therefore attracted intense research interest. Currently, it is challenging to segment medical images on limited datasets such as those for COVID-19. Methods: This study proposes a lightweight transformer+CNN model in which the encoder sub-network is a two-path design that captures both the global dependence of image features and low-level spatial details. Using a CNN and MobileViT to jointly extract image features reduces the model's computational cost and complexity while improving segmentation performance; hence, the model is titled Mini-MobileViT-Seg (MMViT-Seg). In addition, a multi-query attention (MQA) module is proposed to fuse multi-scale features from different levels of the decoder sub-network, further improving the model's performance. MQA can simultaneously fuse multi-input, multi-scale low-level and high-level feature maps, and it supports end-to-end supervised learning guided by ground truth. Results: Two-class infection labeling experiments were conducted on three datasets. The results show that the proposed model achieves the best performance with the fewest parameters among five popular semantic segmentation algorithms. In multi-class infection labeling, the proposed model also achieved competitive performance. Conclusions: The proposed MMViT-Seg was tested on three COVID-19 segmentation datasets, and the results show that it outperforms the other models.
In addition, the proposed MQA module, which effectively fuses multi-scale features from different levels, further improves segmentation accuracy. |
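The description covers MQA only at a high level: it fuses multi-input, multi-scale feature maps from different decoder levels. As an illustrative sketch only (not the authors' implementation — the function names, the per-input query vectors, and the dot-product scoring rule are all assumptions), attention-weighted fusion of multi-scale feature maps might look like:

```python
import numpy as np

def upsample_nearest(x, size):
    """Nearest-neighbour upsampling of a (C, H, W) feature map to (C, size, size)."""
    c, h, w = x.shape
    rows = np.repeat(np.arange(h), size // h)
    cols = np.repeat(np.arange(w), size // w)
    return x[:, rows][:, :, cols]

def multi_query_attention_fuse(features, queries):
    """Fuse multi-scale feature maps with one query vector per input (hypothetical sketch).

    features: list of (C, H_i, W_i) arrays at different scales.
    queries:  (len(features), C) array; one query per input map.
    Returns a fused (C, H_max, W_max) map as an attention-weighted sum.
    """
    target = max(f.shape[1] for f in features)
    ups = [upsample_nearest(f, target) for f in features]       # align all scales
    # Score each map by the dot product of its query with its global-average descriptor.
    descriptors = np.stack([u.mean(axis=(1, 2)) for u in ups])  # (N, C)
    scores = np.einsum("nc,nc->n", queries, descriptors)        # (N,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                    # softmax over the N inputs
    return sum(w * u for w, u in zip(weights, ups))
```

For example, fusing 8x8, 16x16, and 32x32 maps with 4 channels yields a single (4, 32, 32) map; in a real network the queries would be learned parameters and the result could be supervised end-to-end against the ground-truth mask, as the abstract describes.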
format | Online Article Text |
id | pubmed-9833855 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Elsevier B.V. |
record_format | MEDLINE/PubMed |
spelling | pubmed-98338552023-01-12 MMViT-Seg: A lightweight transformer and CNN fusion network for COVID-19 segmentation Yang, Yuan Zhang, Lin Ren, Lei Wang, Xiaohan Comput Methods Programs Biomed Article Background and objective: COVID-19 is a serious threat to human health. Traditional convolutional neural networks (CNNs) can realize medical image segmentation, whilst transformers can be used to perform machine vision tasks, because they have a better ability to capture long-range relationships than CNNs. The combination of CNN and transformers to complete the task of semantic segmentation has attracted intense research. Currently, it is challenging to segment medical images on limited data sets like that on COVID-19. Methods: This study proposes a lightweight transformer+CNN model, in which the encoder sub-network is a two-path design that enables both the global dependence of image features and the low layer spatial details to be effectively captured. Using CNN and MobileViT to jointly extract image features reduces the amount of computation and complexity of the model as well as improves the segmentation performance. So this model is titled Mini-MobileViT-Seg (MMViT-Seg). In addition, a multi query attention (MQA) module is proposed to fuse the multi-scale features from different levels of decoder sub-network, further improving the performance of the model. MQA can simultaneously fuse multi-input, multi-scale low-level feature maps and high-level feature maps as well as conduct end-to-end supervised learning guided by ground truth. Results: The two-class infection labeling experiments were conducted based on three datasets. The final results show that the proposed model has the best performance and the minimum number of parameters among five popular semantic segmentation algorithms. In multi-class infection labeling results, the proposed model also achieved competitive performance. 
Conclusions: The proposed MMViT-Seg is tested on three COVID-19 segmentation datasets, with results showing that this model has better performance than other models. In addition, the proposed MQA module, which can effectively fuse multi-scale features of different levels further improves the segmentation accuracy. Elsevier B.V. 2023-03 2023-01-12 /pmc/articles/PMC9833855/ /pubmed/36706618 http://dx.doi.org/10.1016/j.cmpb.2023.107348 Text en © 2023 Elsevier B.V. All rights reserved. Since January 2020 Elsevier has created a COVID-19 resource centre with free information in English and Mandarin on the novel coronavirus COVID-19. The COVID-19 resource centre is hosted on Elsevier Connect, the company's public news and information website. Elsevier hereby grants permission to make all its COVID-19-related research that is available on the COVID-19 resource centre - including this research content - immediately available in PubMed Central and other publicly funded repositories, such as the WHO COVID database with rights for unrestricted research re-use and analyses in any form or by any means with acknowledgement of the original source. These permissions are granted for free by Elsevier for as long as the COVID-19 resource centre remains active. |
spellingShingle | Article Yang, Yuan Zhang, Lin Ren, Lei Wang, Xiaohan MMViT-Seg: A lightweight transformer and CNN fusion network for COVID-19 segmentation |
title | MMViT-Seg: A lightweight transformer and CNN fusion network for COVID-19 segmentation |
title_full | MMViT-Seg: A lightweight transformer and CNN fusion network for COVID-19 segmentation |
title_fullStr | MMViT-Seg: A lightweight transformer and CNN fusion network for COVID-19 segmentation |
title_full_unstemmed | MMViT-Seg: A lightweight transformer and CNN fusion network for COVID-19 segmentation |
title_short | MMViT-Seg: A lightweight transformer and CNN fusion network for COVID-19 segmentation |
title_sort | mmvit-seg: a lightweight transformer and cnn fusion network for covid-19 segmentation |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9833855/ https://www.ncbi.nlm.nih.gov/pubmed/36706618 http://dx.doi.org/10.1016/j.cmpb.2023.107348 |
work_keys_str_mv | AT yangyuan mmvitsegalightweighttransformerandcnnfusionnetworkforcovid19segmentation AT zhanglin mmvitsegalightweighttransformerandcnnfusionnetworkforcovid19segmentation AT renlei mmvitsegalightweighttransformerandcnnfusionnetworkforcovid19segmentation AT wangxiaohan mmvitsegalightweighttransformerandcnnfusionnetworkforcovid19segmentation |