The auto segmentation for cardiac structures using a dual‐input deep learning network based on vision saliency and transformer
PURPOSE: Accurate segmentation of cardiac structures on coronary CT angiography (CCTA) images is crucial for morphological analysis, measurement, and functional evaluation. In this study, we achieve accurate automatic segmentation of cardiac structures on CCTA images by adopting an innovative dee...
Main Authors: | Wang, Jing, Wang, Shuyu, Liang, Wei, Zhang, Nan, Zhang, Yan |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | John Wiley and Sons Inc. 2022 |
Subjects: | Medical Imaging |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9121042/ https://www.ncbi.nlm.nih.gov/pubmed/35363415 http://dx.doi.org/10.1002/acm2.13597 |
_version_ | 1784711070120673280 |
---|---|
author | Wang, Jing Wang, Shuyu Liang, Wei Zhang, Nan Zhang, Yan |
author_facet | Wang, Jing Wang, Shuyu Liang, Wei Zhang, Nan Zhang, Yan |
author_sort | Wang, Jing |
collection | PubMed |
description | PURPOSE: Accurate segmentation of cardiac structures on coronary CT angiography (CCTA) images is crucial for morphological analysis, measurement, and functional evaluation. In this study, we achieve accurate automatic segmentation of cardiac structures on CCTA images by adopting an innovative deep learning method based on a visual attention mechanism and a transformer network, and we discuss its practical application value. METHODS: We developed a dual‐input deep learning network based on visual saliency and transformer (VST), which incorporates a self‐attention mechanism, for cardiac structure segmentation. CCTA scans from sixty patients were randomly selected as a development set and manually contoured by an experienced technician. The proposed vision attention and transformer model was trained on the patients' CCTA images, with manual contour‐derived binary masks used as the learning targets. We also used a deep supervision strategy by adding auxiliary losses. The loss function of our model was the sum of the Dice loss and the cross‐entropy loss. To quantitatively evaluate the segmentation results, we calculated the Dice similarity coefficient (DSC) and the Hausdorff distance (HD). We also compared the volumes of the automatic and manual segmentations to test for statistically significant differences. RESULTS: Fivefold cross‐validation was used to benchmark the segmentation method. The results were: left ventricular myocardium (LVM) DSC = 0.87, left ventricle (LV) DSC = 0.94, left atrium (LA) DSC = 0.90, right ventricle (RV) DSC = 0.92, right atrium (RA) DSC = 0.91, and aorta (AO) DSC = 0.96. The average DSC was 0.92, and the HD was 7.2 ± 2.1 mm. In the volume comparison, there was no statistically significant difference for any structure except the LVM and LA (p < 0.05). The proposed segmentation method fit the true profiles of the cardiac substructures well, and the model predictions were close to the manual annotations. CONCLUSIONS: The dual‐input and transformer architecture based on visual saliency has high sensitivity and specificity for cardiac structure segmentation and can markedly improve the accuracy of automatic substructure segmentation. This is of gr |
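The training objective and primary evaluation metric in the abstract (the sum of a Dice loss and a cross‐entropy loss, and the Dice similarity coefficient) follow standard formulas that can be sketched as below. This is a minimal NumPy illustration of those general formulas under stated assumptions, not the authors' implementation; the function names and the toy mask are hypothetical.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss (1 - Dice coefficient) over a probability map."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def cross_entropy_loss(pred, target, eps=1e-6):
    """Binary cross-entropy averaged over voxels; clip to avoid log(0)."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def combined_loss(pred, target):
    """Sum of Dice and cross-entropy losses, the form of objective named in the abstract."""
    return dice_loss(pred, target) + cross_entropy_loss(pred, target)

def dice_similarity(pred_mask, gt_mask, eps=1e-6):
    """DSC between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + gt_mask.sum() + eps)

# Toy example: a hypothetical 8x8 ground-truth mask with a 4x4 foreground square.
gt = np.zeros((8, 8))
gt[2:6, 2:6] = 1
print(round(dice_similarity(gt.astype(bool), gt.astype(bool)), 3))  # a perfect match gives DSC = 1.0
```

A perfect prediction drives both the Dice loss and the cross-entropy term to (near) zero, while the DSC of identical masks is 1; per-structure DSC values like those reported (0.87–0.96) would come from applying `dice_similarity` to each predicted and manual substructure mask.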
format | Online Article Text |
id | pubmed-9121042 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | John Wiley and Sons Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9121042 2022-05-21 The auto segmentation for cardiac structures using a dual‐input deep learning network based on vision saliency and transformer Wang, Jing Wang, Shuyu Liang, Wei Zhang, Nan Zhang, Yan J Appl Clin Med Phys Medical Imaging PURPOSE: Accurate segmentation of cardiac structures on coronary CT angiography (CCTA) images is crucial for morphological analysis, measurement, and functional evaluation. In this study, we achieve accurate automatic segmentation of cardiac structures on CCTA images by adopting an innovative deep learning method based on a visual attention mechanism and a transformer network, and we discuss its practical application value. METHODS: We developed a dual‐input deep learning network based on visual saliency and transformer (VST), which incorporates a self‐attention mechanism, for cardiac structure segmentation. CCTA scans from sixty patients were randomly selected as a development set and manually contoured by an experienced technician. The proposed vision attention and transformer model was trained on the patients' CCTA images, with manual contour‐derived binary masks used as the learning targets. We also used a deep supervision strategy by adding auxiliary losses. The loss function of our model was the sum of the Dice loss and the cross‐entropy loss. To quantitatively evaluate the segmentation results, we calculated the Dice similarity coefficient (DSC) and the Hausdorff distance (HD). We also compared the volumes of the automatic and manual segmentations to test for statistically significant differences. RESULTS: Fivefold cross‐validation was used to benchmark the segmentation method. The results were: left ventricular myocardium (LVM) DSC = 0.87, left ventricle (LV) DSC = 0.94, left atrium (LA) DSC = 0.90, right ventricle (RV) DSC = 0.92, right atrium (RA) DSC = 0.91, and aorta (AO) DSC = 0.96. The average DSC was 0.92, and the HD was 7.2 ± 2.1 mm.
In the volume comparison, there was no statistically significant difference for any structure except the LVM and LA (p < 0.05). The proposed segmentation method fit the true profiles of the cardiac substructures well, and the model predictions were close to the manual annotations. CONCLUSIONS: The dual‐input and transformer architecture based on visual saliency has high sensitivity and specificity for cardiac structure segmentation and can markedly improve the accuracy of automatic substructure segmentation. This is of gr John Wiley and Sons Inc. 2022-04-01 /pmc/articles/PMC9121042/ /pubmed/35363415 http://dx.doi.org/10.1002/acm2.13597 Text © 2022 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, LLC on behalf of The American Association of Physicists in Medicine. This is an open access article under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Medical Imaging Wang, Jing Wang, Shuyu Liang, Wei Zhang, Nan Zhang, Yan The auto segmentation for cardiac structures using a dual‐input deep learning network based on vision saliency and transformer |
title | The auto segmentation for cardiac structures using a dual‐input deep learning network based on vision saliency and transformer |
title_full | The auto segmentation for cardiac structures using a dual‐input deep learning network based on vision saliency and transformer |
title_fullStr | The auto segmentation for cardiac structures using a dual‐input deep learning network based on vision saliency and transformer |
title_full_unstemmed | The auto segmentation for cardiac structures using a dual‐input deep learning network based on vision saliency and transformer |
title_short | The auto segmentation for cardiac structures using a dual‐input deep learning network based on vision saliency and transformer |
title_sort | auto segmentation for cardiac structures using a dual‐input deep learning network based on vision saliency and transformer |
topic | Medical Imaging |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9121042/ https://www.ncbi.nlm.nih.gov/pubmed/35363415 http://dx.doi.org/10.1002/acm2.13597 |
work_keys_str_mv | AT wangjing theautosegmentationforcardiacstructuresusingadualinputdeeplearningnetworkbasedonvisionsaliencyandtransformer AT wangshuyu theautosegmentationforcardiacstructuresusingadualinputdeeplearningnetworkbasedonvisionsaliencyandtransformer AT liangwei theautosegmentationforcardiacstructuresusingadualinputdeeplearningnetworkbasedonvisionsaliencyandtransformer AT zhangnan theautosegmentationforcardiacstructuresusingadualinputdeeplearningnetworkbasedonvisionsaliencyandtransformer AT zhangyan theautosegmentationforcardiacstructuresusingadualinputdeeplearningnetworkbasedonvisionsaliencyandtransformer AT wangjing autosegmentationforcardiacstructuresusingadualinputdeeplearningnetworkbasedonvisionsaliencyandtransformer AT wangshuyu autosegmentationforcardiacstructuresusingadualinputdeeplearningnetworkbasedonvisionsaliencyandtransformer AT liangwei autosegmentationforcardiacstructuresusingadualinputdeeplearningnetworkbasedonvisionsaliencyandtransformer AT zhangnan autosegmentationforcardiacstructuresusingadualinputdeeplearningnetworkbasedonvisionsaliencyandtransformer AT zhangyan autosegmentationforcardiacstructuresusingadualinputdeeplearningnetworkbasedonvisionsaliencyandtransformer |