
LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion

Bibliographic Details
Main Authors: Xu, Zhengyuan, Xiang, Wentao, Zhu, Songsheng, Zeng, Rui, Marquez-Chin, Cesar, Chen, Zhen, Chen, Xianqing, Liu, Bin, Li, Jianqing
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7838502/
https://www.ncbi.nlm.nih.gov/pubmed/33519365
http://dx.doi.org/10.3389/fnins.2020.615435
author Xu, Zhengyuan
Xiang, Wentao
Zhu, Songsheng
Zeng, Rui
Marquez-Chin, Cesar
Chen, Zhen
Chen, Xianqing
Liu, Bin
Li, Jianqing
author_facet Xu, Zhengyuan
Xiang, Wentao
Zhu, Songsheng
Zeng, Rui
Marquez-Chin, Cesar
Chen, Zhen
Chen, Xianqing
Liu, Bin
Li, Jianqing
author_sort Xu, Zhengyuan
collection PubMed
description Medical image fusion, which aims to derive complementary information from multi-modality medical images, plays an important role in many clinical applications, such as medical diagnosis and treatment. We propose LatLRR-FCNs, a hybrid medical image fusion framework consisting of latent low-rank representation (LatLRR) and fully convolutional networks (FCNs). Specifically, the LatLRR module decomposes the multi-modality medical images into low-rank and saliency components, which provide fine-grained details and preserve energies, respectively. The FCN module aims to preserve both global and local information by generating a weighting map for each modality image. The final weighting map is obtained using the weighted local energy and the weighted sum of the eight-neighborhood-based modified Laplacian. The fused low-rank component is generated by combining the low-rank components of each modality image according to the guidance provided by the final weighting map within pyramid-based fusion. A simple sum strategy is used for the saliency components. The usefulness and efficiency of the proposed framework are thoroughly evaluated on four medical image fusion tasks, including computed tomography (CT) and magnetic resonance (MR), T1- and T2-weighted MR, positron emission tomography and MR, and single-photon emission CT and MR. The results demonstrate that by leveraging LatLRR for image detail extraction and the FCNs for global and local information description, we achieve performance superior to state-of-the-art methods in terms of both objective assessment and visual quality in some cases. Furthermore, our method has competitive computational cost compared with other baselines.
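The activity measures named in the abstract (the weighted local energy and the weighted sum of the eight-neighborhood-based modified Laplacian) and the simple-sum rule for the saliency components can be illustrated with a short sketch. The snippet below is a minimal illustration only, not the authors' implementation: it assumes box-filter window weights and a binary weighting map, uses hypothetical function names, and omits the FCN-generated weighting maps and the pyramid-based fusion described in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def weighted_local_energy(img, radius=1):
    # Window-weighted sum of squared intensities (box weights assumed here).
    size = 2 * radius + 1
    w = np.ones((size, size)) / size ** 2
    return convolve(img.astype(np.float64) ** 2, w, mode="reflect")

def modified_laplacian_8(img):
    # Eight-neighborhood modified Laplacian: |2c - left - right| terms taken
    # along the horizontal, vertical, and both diagonal directions.
    p = np.pad(img.astype(np.float64), 1, mode="reflect")
    c = p[1:-1, 1:-1]
    return (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:])    # horizontal
            + np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1])  # vertical
            + np.abs(2 * c - p[:-2, :-2] - p[2:, 2:])     # main diagonal
            + np.abs(2 * c - p[:-2, 2:] - p[2:, :-2]))    # anti-diagonal

def weighted_sum_ml8(img, radius=1):
    # Window-weighted sum of the eight-neighborhood modified Laplacian.
    size = 2 * radius + 1
    w = np.ones((size, size)) / size ** 2
    return convolve(modified_laplacian_8(img), w, mode="reflect")

def fuse_pair(low_a, low_b, sal_a, sal_b):
    # Toy fusion rule: take each low-rank pixel from the modality with the
    # larger combined activity; combine the saliency parts with a simple sum.
    act_a = weighted_local_energy(low_a) + weighted_sum_ml8(low_a)
    act_b = weighted_local_energy(low_b) + weighted_sum_ml8(low_b)
    weight = (act_a >= act_b).astype(np.float64)  # binary weighting map
    fused_low = weight * low_a + (1.0 - weight) * low_b
    fused_sal = sal_a + sal_b                     # simple sum strategy
    return fused_low + fused_sal
```

Here low_a/low_b and sal_a/sal_b stand for the low-rank and saliency components that a LatLRR decomposition would produce for each modality image.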
format Online
Article
Text
id pubmed-7838502
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-7838502 2021-01-28 LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion Front Neurosci Neuroscience Frontiers Media S.A. 2021-01-13 /pmc/articles/PMC7838502/ /pubmed/33519365 http://dx.doi.org/10.3389/fnins.2020.615435 Text en Copyright © 2021 Xu, Xiang, Zhu, Zeng, Marquez-Chin, Chen, Chen, Liu and Li. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Xu, Zhengyuan
Xiang, Wentao
Zhu, Songsheng
Zeng, Rui
Marquez-Chin, Cesar
Chen, Zhen
Chen, Xianqing
Liu, Bin
Li, Jianqing
LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion
title LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion
title_full LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion
title_fullStr LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion
title_full_unstemmed LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion
title_short LatLRR-FCNs: Latent Low-Rank Representation With Fully Convolutional Networks for Medical Image Fusion
title_sort latlrr-fcns: latent low-rank representation with fully convolutional networks for medical image fusion
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7838502/
https://www.ncbi.nlm.nih.gov/pubmed/33519365
http://dx.doi.org/10.3389/fnins.2020.615435
work_keys_str_mv AT xuzhengyuan latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT xiangwentao latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT zhusongsheng latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT zengrui latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT marquezchincesar latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT chenzhen latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT chenxianqing latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT liubin latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion
AT lijianqing latlrrfcnslatentlowrankrepresentationwithfullyconvolutionalnetworksformedicalimagefusion