Airborne Infrared and Visible Image Fusion Combined with Region Segmentation
This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method is intended to improve both the target indication and scene spectral features of fused images, as well as the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method first segments the IR image into regions by saliency, identifying the target region and the background region; the low-frequency components in the DTCWT domain are then fused according to the region segmentation result. For the high-frequency components, region weights are assigned according to the richness of region detail, fusion is conducted using both the weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fused image. The experimental results show that the proposed method fully extracts complementary information from the source images and obtains a fusion image with good target indication and rich scene detail. It also produces fusion results superior to those of existing popular fusion methods under both subjective and objective evaluation. With good stability and high fusion accuracy, the method can meet the requirements of IR-visible image fusion systems.
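The abstract outlines a concrete pipeline: saliency-based segmentation of the IR image, region-guided fusion of the DTCWT low-frequency components, weighted fusion with noise shrinkage of the high-frequency components, and inverse-transform reconstruction. The sketch below is only an illustration of that pipeline, not the authors' implementation: it assumes the open-source Python `dtcwt` package as the transform backend, replaces the paper's saliency-based segmentation with a simple intensity threshold, and substitutes a choose-max-magnitude rule for the paper's detail-richness weights and adaptive-phase fusion.

```python
# Illustrative sketch only: stand-in rules, not the paper's exact scheme.
import numpy as np
import dtcwt


def segment_ir_target(ir, quantile=0.95):
    """Crude stand-in for saliency-based segmentation: mark the brightest
    IR pixels as the target region, everything else as background."""
    return (ir >= np.quantile(ir, quantile)).astype(float)


def resize_mask(mask, shape):
    """Nearest-neighbour resize of the mask to a coefficient-array shape."""
    rows = np.linspace(0, mask.shape[0] - 1, shape[0]).astype(int)
    cols = np.linspace(0, mask.shape[1] - 1, shape[1]).astype(int)
    return mask[np.ix_(rows, cols)]


def soft_shrink(c, sigma):
    """Shrinkage on complex coefficients: shrink the magnitude, keep the phase."""
    mag = np.abs(c)
    return np.maximum(mag - sigma, 0.0) * np.exp(1j * np.angle(c))


def fuse_ir_visible(ir, vis, nlevels=4, noise_sigma=0.02):
    """Fuse two registered, same-size float images in [0, 1]."""
    t = dtcwt.Transform2d()
    p_ir = t.forward(ir, nlevels=nlevels)
    p_vis = t.forward(vis, nlevels=nlevels)

    # Low-frequency fusion guided by the segmentation result: take the IR
    # low-pass inside the target region, the visible low-pass in the background.
    m = resize_mask(segment_ir_target(ir), p_ir.lowpass.shape)
    fused_lp = m * p_ir.lowpass + (1.0 - m) * p_vis.lowpass

    # High-frequency fusion: keep the coefficient with the larger magnitude
    # (stand-in for region weights and adaptive phase), then shrink to suppress noise.
    fused_hp = []
    for h_ir, h_vis in zip(p_ir.highpasses, p_vis.highpasses):
        fused = np.where(np.abs(h_ir) >= np.abs(h_vis), h_ir, h_vis)
        fused_hp.append(soft_shrink(fused, noise_sigma))

    # Reconstruct the fused image from the fused low- and high-frequency parts.
    return t.inverse(dtcwt.Pyramid(fused_lp, tuple(fused_hp)))
```

With registered grayscale inputs normalized to [0, 1], `fuse_ir_visible(ir, vis)` returns the fused image; the segmentation quantile and shrinkage level are tuning knobs chosen for illustration, not values taken from the paper.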
Main Authors: | Zuo, Yujia; Liu, Jinghong; Bai, Guanbing; Wang, Xuan; Sun, Mingchao |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2017 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5470803/ https://www.ncbi.nlm.nih.gov/pubmed/28505137 http://dx.doi.org/10.3390/s17051127 |
_version_ | 1783243826362580992 |
---|---|
author | Zuo, Yujia Liu, Jinghong Bai, Guanbing Wang, Xuan Sun, Mingchao |
author_facet | Zuo, Yujia Liu, Jinghong Bai, Guanbing Wang, Xuan Sun, Mingchao |
author_sort | Zuo, Yujia |
collection | PubMed |
description | This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method is intended to improve both the target indication and scene spectral features of fused images, as well as the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method first segments the IR image into regions by saliency, identifying the target region and the background region; the low-frequency components in the DTCWT domain are then fused according to the region segmentation result. For the high-frequency components, region weights are assigned according to the richness of region detail, fusion is conducted using both the weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fused image. The experimental results show that the proposed method fully extracts complementary information from the source images and obtains a fusion image with good target indication and rich scene detail. It also produces fusion results superior to those of existing popular fusion methods under both subjective and objective evaluation. With good stability and high fusion accuracy, the method can meet the requirements of IR-visible image fusion systems. |
format | Online Article Text |
id | pubmed-5470803 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2017 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-5470803 2017-06-16 Airborne Infrared and Visible Image Fusion Combined with Region Segmentation Zuo, Yujia Liu, Jinghong Bai, Guanbing Wang, Xuan Sun, Mingchao Sensors (Basel) Article This paper proposes an infrared (IR) and visible image fusion method that introduces region segmentation into the dual-tree complex wavelet transform (DTCWT) domain. The method is intended to improve both the target indication and scene spectral features of fused images, as well as the target identification and tracking reliability of the fusion system, on an airborne photoelectric platform. The method first segments the IR image into regions by saliency, identifying the target region and the background region; the low-frequency components in the DTCWT domain are then fused according to the region segmentation result. For the high-frequency components, region weights are assigned according to the richness of region detail, fusion is conducted using both the weights and adaptive phases, and a shrinkage function is introduced to suppress noise. Finally, the fused low-frequency and high-frequency components are reconstructed to obtain the fused image. The experimental results show that the proposed method fully extracts complementary information from the source images and obtains a fusion image with good target indication and rich scene detail. It also produces fusion results superior to those of existing popular fusion methods under both subjective and objective evaluation. With good stability and high fusion accuracy, the method can meet the requirements of IR-visible image fusion systems. MDPI 2017-05-15 /pmc/articles/PMC5470803/ /pubmed/28505137 http://dx.doi.org/10.3390/s17051127 Text en © 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Zuo, Yujia Liu, Jinghong Bai, Guanbing Wang, Xuan Sun, Mingchao Airborne Infrared and Visible Image Fusion Combined with Region Segmentation |
title | Airborne Infrared and Visible Image Fusion Combined with Region Segmentation |
title_full | Airborne Infrared and Visible Image Fusion Combined with Region Segmentation |
title_fullStr | Airborne Infrared and Visible Image Fusion Combined with Region Segmentation |
title_full_unstemmed | Airborne Infrared and Visible Image Fusion Combined with Region Segmentation |
title_short | Airborne Infrared and Visible Image Fusion Combined with Region Segmentation |
title_sort | airborne infrared and visible image fusion combined with region segmentation |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5470803/ https://www.ncbi.nlm.nih.gov/pubmed/28505137 http://dx.doi.org/10.3390/s17051127 |
work_keys_str_mv | AT zuoyujia airborneinfraredandvisibleimagefusioncombinedwithregionsegmentation AT liujinghong airborneinfraredandvisibleimagefusioncombinedwithregionsegmentation AT baiguanbing airborneinfraredandvisibleimagefusioncombinedwithregionsegmentation AT wangxuan airborneinfraredandvisibleimagefusioncombinedwithregionsegmentation AT sunmingchao airborneinfraredandvisibleimagefusioncombinedwithregionsegmentation |