PCNN Model Guided by Saliency Mechanism for Image Fusion in Transform Domain

In heterogeneous image fusion problems, different imaging mechanisms have always existed between the time-of-flight and visible-light heterogeneous images collected by binocular acquisition systems in orchard environments. Determining how to enhance the fusion quality is key to the solution. A shortcoming of the pulse coupled neural network model is that its parameters are limited by manual experience settings and cannot be terminated adaptively. The limitations are obvious during the ignition process and include ignoring the impact of image changes and fluctuations on the results, pixel artifacts, area blurring, and unclear edges. To address these problems, an image fusion method in the pulse coupled neural network transform domain, guided by a saliency mechanism, is proposed. A non-subsampled shearlet transform is used to decompose the accurately registered image; the time-of-flight low-frequency component, after multiple lighting segmentation using a pulse coupled neural network, is simplified to a first-order Markov situation. The significance function is defined as first-order Markov mutual information to measure the termination condition. A new momentum-driven multi-objective artificial bee colony algorithm is used to optimize the parameters of the link channel feedback term, link strength, and dynamic threshold attenuation factor. The low-frequency components of the time-of-flight and color images, after multiple lighting segmentation using a pulse coupled neural network, are fused using the weighted average rule, and the high-frequency components are fused using improved bilateral filters. The results show that, according to nine objective image evaluation indicators, the proposed algorithm achieves the best fusion effect on the time-of-flight confidence image and the corresponding visible-light image collected in a natural scene. It is therefore suitable for heterogeneous image fusion in complex orchard environments in natural landscapes.
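
Since the abstract revolves around the classic pulse coupled neural network (PCNN) and the three parameters tuned by the artificial bee colony (the link channel feedback term, link strength, and dynamic threshold attenuation factor), a minimal sketch of the standard discrete PCNN firing loop, together with the plain weighted-average rule for the low-frequency bands, may help place those parameters. This is an illustrative sketch only, assuming the common Eckhorn-style discrete PCNN: the function name pcnn_segment, all numeric defaults, the 3x3 weight kernel, and the equal fusion weights are assumptions and are not taken from the paper, and the paper's NSST decomposition, saliency-based first-order Markov mutual-information termination, improved bilateral filtering, and ABC optimization are not reproduced here.

```python
# Minimal sketch of a classic PCNN firing loop (assumed standard model,
# not the paper's exact formulation).
import numpy as np
from scipy.ndimage import convolve

def pcnn_segment(S, beta=0.2, V_F=0.5, V_L=0.2, V_T=20.0,
                 a_F=0.1, a_L=1.0, a_T=0.2, n_iter=30):
    """Run a plain PCNN on a normalized image S in [0, 1].

    beta is the link strength, a_T the dynamic threshold attenuation
    factor, and V_F the feedback amplitude of the link channel: the three
    parameters the abstract says are optimized by the multi-objective
    artificial bee colony. Returns the cumulative firing (ignition) map
    used for segmentation. All default values here are illustrative.
    """
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])          # synaptic weights to the 8 neighbours
    F = np.zeros_like(S)                     # feedback input
    L = np.zeros_like(S)                     # linking input
    Y = np.zeros_like(S)                     # pulse output
    T = np.ones_like(S)                      # dynamic threshold
    fire_count = np.zeros_like(S)
    for _ in range(n_iter):
        neighbours = convolve(Y, W, mode='constant')
        F = np.exp(-a_F) * F + V_F * neighbours + S   # feedback channel
        L = np.exp(-a_L) * L + V_L * neighbours       # linking channel
        U = F * (1.0 + beta * L)                      # internal activity
        Y = (U > T).astype(S.dtype)                   # neurons fire (ignite)
        T = np.exp(-a_T) * T + V_T * Y                # threshold decays, then jumps where fired
        fire_count += Y
    return fire_count

# Example: segment a synthetic low-frequency band and fuse two bands with the
# weighted average rule mentioned in the abstract (equal weights assumed here).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tof_low = rng.random((64, 64))
    vis_low = rng.random((64, 64))
    seg = pcnn_segment(tof_low)
    fused_low = 0.5 * tof_low + 0.5 * vis_low
    print(seg.max(), fused_low.shape)
```

In the paper's pipeline, the firing maps would be computed on the NSST low-frequency components of the registered time-of-flight and visible-light images, and the loop would terminate when the first-order Markov mutual-information criterion is satisfied rather than after a fixed number of iterations.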

Bibliographic Details
Main Authors: Liu, Liqun; Huo, Jiuyuan
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007409/
https://www.ncbi.nlm.nih.gov/pubmed/36904693
http://dx.doi.org/10.3390/s23052488
collection PubMed
id pubmed-10007409
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal Sensors (Basel)
published 2023-02-23
rights © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
topic Article