Medical image fusion quality assessment based on conditional generative adversarial network
Multimodal medical image fusion (MMIF) has been proven to effectively improve the efficiency of disease diagnosis and treatment. However, few works have explored dedicated evaluation methods for MMIF. This paper proposes a novel quality assessment method for MMIF based on the conditional generative...
Main Authors: | Tang, Lu; Hui, Yu; Yang, Hang; Zhao, Yinghong; Tian, Chuangeng |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A. 2022 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9400712/ https://www.ncbi.nlm.nih.gov/pubmed/36033610 http://dx.doi.org/10.3389/fnins.2022.986153 |
_version_ | 1784772801503166464 |
---|---|
author | Tang, Lu; Hui, Yu; Yang, Hang; Zhao, Yinghong; Tian, Chuangeng |
author_facet | Tang, Lu; Hui, Yu; Yang, Hang; Zhao, Yinghong; Tian, Chuangeng |
author_sort | Tang, Lu |
collection | PubMed |
description | Multimodal medical image fusion (MMIF) has been proven to effectively improve the efficiency of disease diagnosis and treatment. However, few works have explored dedicated evaluation methods for MMIF. This paper proposes a novel quality assessment method for MMIF based on conditional generative adversarial networks. First, with the mean opinion scores (MOS) as the guiding condition, the feature information of the two source images is extracted separately through the dual-channel encoder-decoder. The features of different levels in the encoder-decoder are hierarchically input into the self-attention feature block, which is a fusion strategy for self-identifying favorable features. Then, the discriminator is used to improve the fusion objective of the generator. Finally, we calculate the structural similarity index between the fake image and the true image, and the MOS corresponding to the maximum result is used as the final assessment result of the fused image quality. Based on the established MMIF database, the proposed method achieves state-of-the-art performance among the comparison methods, with excellent agreement with subjective evaluations, indicating that the method is effective in the quality assessment of medical fusion images. |
format | Online Article Text |
id | pubmed-9400712 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9400712 2022-08-25 Medical image fusion quality assessment based on conditional generative adversarial network Tang, Lu; Hui, Yu; Yang, Hang; Zhao, Yinghong; Tian, Chuangeng Front Neurosci Neuroscience Multimodal medical image fusion (MMIF) has been proven to effectively improve the efficiency of disease diagnosis and treatment. However, few works have explored dedicated evaluation methods for MMIF. This paper proposes a novel quality assessment method for MMIF based on conditional generative adversarial networks. First, with the mean opinion scores (MOS) as the guiding condition, the feature information of the two source images is extracted separately through the dual-channel encoder-decoder. The features of different levels in the encoder-decoder are hierarchically input into the self-attention feature block, which is a fusion strategy for self-identifying favorable features. Then, the discriminator is used to improve the fusion objective of the generator. Finally, we calculate the structural similarity index between the fake image and the true image, and the MOS corresponding to the maximum result is used as the final assessment result of the fused image quality. Based on the established MMIF database, the proposed method achieves state-of-the-art performance among the comparison methods, with excellent agreement with subjective evaluations, indicating that the method is effective in the quality assessment of medical fusion images. Frontiers Media S.A. 2022-08-09 /pmc/articles/PMC9400712/ /pubmed/36033610 http://dx.doi.org/10.3389/fnins.2022.986153 Text en Copyright © 2022 Tang, Hui, Yang, Zhao and Tian. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Tang, Lu; Hui, Yu; Yang, Hang; Zhao, Yinghong; Tian, Chuangeng Medical image fusion quality assessment based on conditional generative adversarial network |
title | Medical image fusion quality assessment based on conditional generative adversarial network |
title_full | Medical image fusion quality assessment based on conditional generative adversarial network |
title_fullStr | Medical image fusion quality assessment based on conditional generative adversarial network |
title_full_unstemmed | Medical image fusion quality assessment based on conditional generative adversarial network |
title_short | Medical image fusion quality assessment based on conditional generative adversarial network |
title_sort | medical image fusion quality assessment based on conditional generative adversarial network |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9400712/ https://www.ncbi.nlm.nih.gov/pubmed/36033610 http://dx.doi.org/10.3389/fnins.2022.986153 |
work_keys_str_mv | AT tanglu medicalimagefusionqualityassessmentbasedonconditionalgenerativeadversarialnetwork AT huiyu medicalimagefusionqualityassessmentbasedonconditionalgenerativeadversarialnetwork AT yanghang medicalimagefusionqualityassessmentbasedonconditionalgenerativeadversarialnetwork AT zhaoyinghong medicalimagefusionqualityassessmentbasedonconditionalgenerativeadversarialnetwork AT tianchuangeng medicalimagefusionqualityassessmentbasedonconditionalgenerativeadversarialnetwork |
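The abstract above describes the final scoring step of the method: a MOS-conditioned generator produces a "fake" fused image for each candidate MOS value, the structural similarity (SSIM) between each fake image and the true fused image is computed, and the MOS that yields the maximum SSIM is reported as the quality score. Below is a minimal Python sketch of that selection loop, not the authors' implementation; the `generator` callable, the candidate MOS grid, and the function name are illustrative assumptions.

```python
# Minimal sketch of the SSIM-based MOS selection described in the abstract.
# Assumption: `generator` is a trained conditional-GAN generator that maps the
# two source modalities plus a MOS condition to a fused image (NumPy array).
import numpy as np
from skimage.metrics import structural_similarity as ssim


def assess_fused_image(generator, source_a, source_b, fused_true,
                       candidate_mos=np.arange(1.0, 5.5, 0.5)):
    """Return the MOS whose conditionally generated image is most similar
    (by SSIM) to the true fused image, together with that SSIM value."""
    best_mos, best_ssim = None, -1.0
    data_range = float(fused_true.max() - fused_true.min())
    for mos in candidate_mos:
        # Generate a "fake" fused image conditioned on this candidate MOS.
        fake = generator(source_a, source_b, condition=float(mos))
        score = ssim(fake, fused_true, data_range=data_range)
        if score > best_ssim:
            best_mos, best_ssim = float(mos), score
    # The MOS giving the maximum SSIM is taken as the predicted quality score.
    return best_mos, best_ssim
```

In this sketch the generator stands in for the MOS-conditioned dual-channel encoder-decoder with self-attention feature blocks described in the abstract; the candidate MOS range and step are placeholders chosen for illustration.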