
Deep Learning Approach for Fusion of Magnetic Resonance Imaging-Positron Emission Tomography Image Based on Extract Image Features using Pretrained Network (VGG19)

BACKGROUND: Image fusion combines the information from several different images into a single image. In this paper, we present a deep learning network approach for the fusion of magnetic resonance imaging (MRI) and positron emission tomography (PET) images. METHODS: We fused pairs of MRI and PET images automatically with a pretrained convolutional neural network (CNN, VGG19). First, the PET image was converted from red-green-blue (RGB) space to hue-saturation-intensity (HSI) space to preserve its hue and saturation information. We then extracted features from both images with the pretrained CNN and used the weights derived from these features to construct the fused image, multiplying each weight by its corresponding image. To compensate for the resulting loss of contrast, a constant fraction of the original image was added to the final result. Finally, quantitative criteria (entropy, mutual information, discrepancy, and overall performance [OP]) were applied to evaluate the fusion results, and we compared our method with the most widely used methods in the spatial and transform domains. RESULTS: The entropy, mutual information, discrepancy, and OP values were 3.0319, 2.3993, 3.8187, and 0.9899, respectively. Based on these quantitative assessments, our method was the most effective and simplest way to fuse images, particularly compared with spatial-domain methods. CONCLUSION: We conclude that the proposed method provides more accurate MRI-PET image fusion.
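The fusion pipeline described in the abstract (RGB-to-HSI conversion, feature-based weighting, and contrast compensation) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: a simple gradient-magnitude saliency map stands in for the VGG19 feature maps, and the `alpha` contrast coefficient is an assumed value.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1]) to hue, saturation, intensity."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(rgb, axis=-1) / np.maximum(i, 1e-12)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2.0 * np.pi - h, h)  # angle in [0, 2*pi]
    return h, s, i

def saliency(img):
    """Gradient-magnitude map, standing in here for VGG19 feature maps."""
    gy, gx = np.gradient(img)
    return np.sqrt(gx ** 2 + gy ** 2)

def fuse(mri, pet_intensity, alpha=0.3):
    """Weight each source by its saliency, then add back a constant
    fraction of the originals to compensate for lost contrast
    (alpha is an assumed coefficient, not the paper's value)."""
    w_mri = saliency(mri)
    w_pet = saliency(pet_intensity)
    total = w_mri + w_pet + 1e-12
    fused = (w_mri * mri + w_pet * pet_intensity) / total
    return np.clip(fused + alpha * (mri + pet_intensity) / 2.0, 0.0, 1.0)
```

In the paper's scheme, the fused intensity channel would then be recombined with the saved hue and saturation channels to produce the final color image.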


Bibliographic Details
Main Authors: Amini, Nasrin; Mostaar, Ahmad
Format: Online Article Text
Language: English
Published: Wolters Kluwer - Medknow, 2021
Subjects: Original Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8804594/
https://www.ncbi.nlm.nih.gov/pubmed/35265462
http://dx.doi.org/10.4103/jmss.JMSS_80_20
Journal: Journal of Medical Signals & Sensors (J Med Signals Sens), Original Article
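Two of the quantitative criteria reported in the RESULTS, entropy and mutual information, can be computed directly from image histograms. A sketch (the bin counts are assumed choices; the discrepancy and OP measures are omitted):

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of an image's grey-level histogram,
    for images with values in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, bins=64):
    """Mutual information (bits) between two same-shaped images,
    estimated from their joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins,
                                 range=[[0.0, 1.0], [0.0, 1.0]])
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    mask = pxy > 0
    return np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask]))
```

As a sanity check, the mutual information of an image with itself equals its entropy at the same bin count, and a constant image has zero entropy.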
Published online: 2021-12-28. Copyright © 2021 Journal of Medical Signals & Sensors. This is an open access journal; articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License (https://creativecommons.org/licenses/by-nc-sa/4.0/), which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under identical terms.