Multi-Modal Medical Image Fusion Based on FusionNet in YIQ Color Space
In order to obtain the physiological information and key features of source images to the maximum extent, improve the visual effect and clarity of the fused image, and reduce the computation, a multi-modal medical image fusion framework based on feature reuse is proposed. The framework consists of intuitive fuzzy processing (IFP), capture image details network (CIDN), fusion, and decoding.
Main Authors: Guo, Kai; Li, Xiongfei; Zang, Hongrui; Fan, Tiehu
Format: Online Article Text
Language: English
Published: MDPI, 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7766984/ https://www.ncbi.nlm.nih.gov/pubmed/33348893 http://dx.doi.org/10.3390/e22121423
_version_ | 1783628849542594560 |
author | Guo, Kai; Li, Xiongfei; Zang, Hongrui; Fan, Tiehu
author_facet | Guo, Kai; Li, Xiongfei; Zang, Hongrui; Fan, Tiehu
author_sort | Guo, Kai |
collection | PubMed |
description | In order to obtain the physiological information and key features of source images to the maximum extent, improve the visual effect and clarity of the fused image, and reduce the computation, a multi-modal medical image fusion framework based on feature reuse is proposed. The framework consists of intuitive fuzzy processing (IFP), capture image details network (CIDN), fusion, and decoding. First, the membership function of the image is redefined to remove redundant features and obtain the image with complete features. Then, inspired by DenseNet, we propose a new encoder to capture all the medical information features in the source image. In the fusion layer, we calculate the weight of each feature graph in the required fusion coefficient according to the trajectory of the feature graph. Finally, the filtered medical information is spliced and decoded to reproduce the required fusion image. In the encoding and image reconstruction networks, the mixed loss function of cross entropy and structural similarity is adopted to greatly reduce the information loss in image fusion. To assess performance, we conducted three sets of experiments on medical images of different grayscales and colors. Experimental results show that the proposed algorithm has advantages not only in detail and structure recognition but also in visual features and time complexity compared with other algorithms. |
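The abstract describes fusing in YIQ color space (where the Y channel carries luminance/structure and I, Q carry chrominance) and training with a mixed cross-entropy + structural-similarity loss. As a minimal sketch, not the paper's code: the standard NTSC RGB↔YIQ conversion such pipelines rely on, plus a simplified global-statistics SSIM and a hypothetical `lam`-weighted mixed loss (the paper's actual loss weighting and windowed-SSIM details are not given in this record):

```python
import numpy as np

# Standard NTSC RGB -> YIQ conversion matrix.
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],
    [0.596, -0.274, -0.322],
    [0.211, -0.523,  0.312],
])

def rgb_to_yiq(rgb):
    """Convert an (..., 3) RGB array in [0, 1] to YIQ."""
    return rgb @ RGB_TO_YIQ.T

def yiq_to_rgb(yiq):
    """Inverse conversion back to RGB."""
    return yiq @ np.linalg.inv(RGB_TO_YIQ).T

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified SSIM using global (whole-image) statistics."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def bce(pred, target, eps=1e-7):
    """Pixel-wise binary cross entropy for values in [0, 1]."""
    p = np.clip(pred, eps, 1 - eps)
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

def mixed_loss(pred, target, lam=0.5):
    """Hypothetical mix of cross entropy and (1 - SSIM); `lam` is a placeholder weight."""
    return lam * bce(pred, target) + (1 - lam) * (1 - ssim(pred, target))
```

Python's stdlib `colorsys` module provides a comparable scalar `rgb_to_yiq`, and libraries such as scikit-image offer the windowed SSIM typically used in practice.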
format | Online Article Text |
id | pubmed-7766984 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-77669842021-02-24 Multi-Modal Medical Image Fusion Based on FusionNet in YIQ Color Space Guo, Kai Li, Xiongfei Zang, Hongrui Fan, Tiehu Entropy (Basel) Article In order to obtain the physiological information and key features of source images to the maximum extent, improve the visual effect and clarity of the fused image, and reduce the computation, a multi-modal medical image fusion framework based on feature reuse is proposed. The framework consists of intuitive fuzzy processing (IFP), capture image details network (CIDN), fusion, and decoding. First, the membership function of the image is redefined to remove redundant features and obtain the image with complete features. Then, inspired by DenseNet, we proposed a new encoder to capture all the medical information features in the source image. In the fusion layer, we calculate the weight of each feature graph in the required fusion coefficient according to the trajectory of the feature graph. Finally, the filtered medical information is spliced and decoded to reproduce the required fusion image. In the encoding and image reconstruction networks, the mixed loss function of cross entropy and structural similarity is adopted to greatly reduce the information loss in image fusion. To assess performance, we conducted three sets of experiments on medical images of different grayscales and colors. Experimental results show that the proposed algorithm has advantages not only in detail and structure recognition but also in visual features and time complexity compared with other algorithms. MDPI 2020-12-17 /pmc/articles/PMC7766984/ /pubmed/33348893 http://dx.doi.org/10.3390/e22121423 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Guo, Kai Li, Xiongfei Zang, Hongrui Fan, Tiehu Multi-Modal Medical Image Fusion Based on FusionNet in YIQ Color Space |
title | Multi-Modal Medical Image Fusion Based on FusionNet in YIQ Color Space |
title_full | Multi-Modal Medical Image Fusion Based on FusionNet in YIQ Color Space |
title_fullStr | Multi-Modal Medical Image Fusion Based on FusionNet in YIQ Color Space |
title_full_unstemmed | Multi-Modal Medical Image Fusion Based on FusionNet in YIQ Color Space |
title_short | Multi-Modal Medical Image Fusion Based on FusionNet in YIQ Color Space |
title_sort | multi-modal medical image fusion based on fusionnet in yiq color space |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7766984/ https://www.ncbi.nlm.nih.gov/pubmed/33348893 http://dx.doi.org/10.3390/e22121423 |
work_keys_str_mv | AT guokai multimodalmedicalimagefusionbasedonfusionnetinyiqcolorspace AT lixiongfei multimodalmedicalimagefusionbasedonfusionnetinyiqcolorspace AT zanghongrui multimodalmedicalimagefusionbasedonfusionnetinyiqcolorspace AT fantiehu multimodalmedicalimagefusionbasedonfusionnetinyiqcolorspace |