
Local extreme map guided multi-modal brain image fusion

Multi-modal brain image fusion aims to integrate the salient and complementary features of different brain imaging modalities into a single comprehensive image. A well-fused brain image makes it convenient for doctors to precisely examine brain diseases and can be input to intelligent syst...

Full description

Bibliographic Details
Main Authors: Zhang, Yu, Xiang, Wenhao, Zhang, Shunli, Shen, Jianjun, Wei, Ran, Bai, Xiangzhi, Zhang, Li, Zhang, Qing
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9650390/
https://www.ncbi.nlm.nih.gov/pubmed/36389249
http://dx.doi.org/10.3389/fnins.2022.1055451
_version_ 1784828006501449728
author Zhang, Yu
Xiang, Wenhao
Zhang, Shunli
Shen, Jianjun
Wei, Ran
Bai, Xiangzhi
Zhang, Li
Zhang, Qing
author_facet Zhang, Yu
Xiang, Wenhao
Zhang, Shunli
Shen, Jianjun
Wei, Ran
Bai, Xiangzhi
Zhang, Li
Zhang, Qing
author_sort Zhang, Yu
collection PubMed
description Multi-modal brain image fusion aims to integrate the salient and complementary features of different brain imaging modalities into a single comprehensive image. A well-fused brain image makes it convenient for doctors to precisely examine brain diseases and can be input to intelligent systems to automatically detect possible diseases. To this end, we propose a local extreme map guided multi-modal brain image fusion method. First, each source image is iteratively smoothed by the local extreme map guided image filter; in each iteration, the guidance image is alternately set to the local minimum map of the input image and the local maximum map of the previously filtered image. From the iteratively smoothed images, multiple scales of bright and dark feature maps of each source image are gradually extracted from the difference image of every two consecutively smoothed images. Then, the multiple scales of bright feature maps and the base images (i.e., the final-scale smoothed images) of the source images are fused by the elementwise-maximum rule, while the multiple scales of dark feature maps are fused by the elementwise-minimum rule. Finally, the fused bright feature maps, dark feature maps, and base image are integrated to generate a single informative brain image. Extensive experiments verify that the proposed method outperforms eight state-of-the-art (SOTA) image fusion methods both qualitatively and quantitatively and demonstrates great application potential in clinical scenarios.
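
The description above walks through a concrete pipeline: iterative smoothing, per-scale bright/dark difference maps, and elementwise max/min fusion. As a rough structural sketch only (not the authors' implementation), the Python below substitutes a simple morphological smoother for the paper's local extreme map guided filter; the names smooth, decompose, and fuse are hypothetical stand-ins.

import numpy as np
from scipy import ndimage

def smooth(img, size=3):
    # Stand-in smoother (ASSUMPTION): averages the local minimum and local
    # maximum maps (grey-scale erosion/dilation). The paper instead filters
    # with guidance alternating between these two local extreme maps.
    lo = ndimage.grey_erosion(img, size=(size, size))
    hi = ndimage.grey_dilation(img, size=(size, size))
    return (lo + hi) / 2.0

def decompose(img, n_scales=4):
    # Iterative smoothing; the difference of every two consecutively
    # smoothed images yields one scale of bright (positive part) and
    # dark (negative part) feature maps.
    bright, dark, cur = [], [], img.astype(np.float64)
    for _ in range(n_scales):
        nxt = smooth(cur)
        diff = cur - nxt
        bright.append(np.maximum(diff, 0.0))
        dark.append(np.minimum(diff, 0.0))
        cur = nxt
    return bright, dark, cur  # cur = final-scale base image

def fuse(img_a, img_b, n_scales=4):
    ba, da, base_a = decompose(img_a, n_scales)
    bb, db, base_b = decompose(img_b, n_scales)
    # Elementwise-maximum rule for bright features and base images,
    # elementwise-minimum rule for dark features, then recombine.
    bright = sum(np.maximum(x, y) for x, y in zip(ba, bb))
    dark = sum(np.minimum(x, y) for x, y in zip(da, db))
    return np.maximum(base_a, base_b) + bright + dark

For two registered, equal-size grayscale inputs (e.g., CT and MR slices as float arrays), fuse(ct, mr) returns the composite image; what distinguishes the actual method is the local extreme map guided filter, which keeps the bright and dark maps aligned with true local extrema rather than blurring across them.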
format Online
Article
Text
id pubmed-9650390
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9650390 2022-11-15 Local extreme map guided multi-modal brain image fusion Zhang, Yu Xiang, Wenhao Zhang, Shunli Shen, Jianjun Wei, Ran Bai, Xiangzhi Zhang, Li Zhang, Qing Front Neurosci Neuroscience Multi-modal brain image fusion aims to integrate the salient and complementary features of different brain imaging modalities into a single comprehensive image. A well-fused brain image makes it convenient for doctors to precisely examine brain diseases and can be input to intelligent systems to automatically detect possible diseases. To this end, we propose a local extreme map guided multi-modal brain image fusion method. First, each source image is iteratively smoothed by the local extreme map guided image filter; in each iteration, the guidance image is alternately set to the local minimum map of the input image and the local maximum map of the previously filtered image. From the iteratively smoothed images, multiple scales of bright and dark feature maps of each source image are gradually extracted from the difference image of every two consecutively smoothed images. Then, the multiple scales of bright feature maps and the base images (i.e., the final-scale smoothed images) of the source images are fused by the elementwise-maximum rule, while the multiple scales of dark feature maps are fused by the elementwise-minimum rule. Finally, the fused bright feature maps, dark feature maps, and base image are integrated to generate a single informative brain image. Extensive experiments verify that the proposed method outperforms eight state-of-the-art (SOTA) image fusion methods both qualitatively and quantitatively and demonstrates great application potential in clinical scenarios. Frontiers Media S.A. 2022-10-28 /pmc/articles/PMC9650390/ /pubmed/36389249 http://dx.doi.org/10.3389/fnins.2022.1055451 Text en Copyright © 2022 Zhang, Xiang, Zhang, Shen, Wei, Bai, Zhang and Zhang. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Zhang, Yu
Xiang, Wenhao
Zhang, Shunli
Shen, Jianjun
Wei, Ran
Bai, Xiangzhi
Zhang, Li
Zhang, Qing
Local extreme map guided multi-modal brain image fusion
title Local extreme map guided multi-modal brain image fusion
title_full Local extreme map guided multi-modal brain image fusion
title_fullStr Local extreme map guided multi-modal brain image fusion
title_full_unstemmed Local extreme map guided multi-modal brain image fusion
title_short Local extreme map guided multi-modal brain image fusion
title_sort local extreme map guided multi-modal brain image fusion
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9650390/
https://www.ncbi.nlm.nih.gov/pubmed/36389249
http://dx.doi.org/10.3389/fnins.2022.1055451
work_keys_str_mv AT zhangyu localextrememapguidedmultimodalbrainimagefusion
AT xiangwenhao localextrememapguidedmultimodalbrainimagefusion
AT zhangshunli localextrememapguidedmultimodalbrainimagefusion
AT shenjianjun localextrememapguidedmultimodalbrainimagefusion
AT weiran localextrememapguidedmultimodalbrainimagefusion
AT baixiangzhi localextrememapguidedmultimodalbrainimagefusion
AT zhangli localextrememapguidedmultimodalbrainimagefusion
AT zhangqing localextrememapguidedmultimodalbrainimagefusion