
A Learning-Enhanced Two-Pair Spatiotemporal Reflectance Fusion Model for GF-2 and GF-1 WFV Satellite Data

Because current satellite sensor observation conditions make it difficult to meet application requirements for time series of remotely sensed images with high spatial resolution, reconstructing high-resolution images at specified dates is essential. As an effective data reconstruc...

Full description

Bibliographic Details
Main Authors: Ge, Yanqin, Li, Yanrong, Chen, Jinyong, Sun, Kang, Li, Dacheng, Han, Qijin
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7146212/
https://www.ncbi.nlm.nih.gov/pubmed/32213863
http://dx.doi.org/10.3390/s20061789
_version_ 1783520148722810880
author Ge, Yanqin
Li, Yanrong
Chen, Jinyong
Sun, Kang
Li, Dacheng
Han, Qijin
author_facet Ge, Yanqin
Li, Yanrong
Chen, Jinyong
Sun, Kang
Li, Dacheng
Han, Qijin
author_sort Ge, Yanqin
collection PubMed
description Because current satellite sensor observation conditions make it difficult to meet application requirements for time series of remotely sensed images with high spatial resolution, reconstructing high-resolution images at specified dates is essential. As an effective data reconstruction technique, spatiotemporal fusion can be used to generate time series of land surface parameters with clear geophysical significance. In this study, an improved fusion model based on the Sparse Representation-Based Spatiotemporal Reflectance Fusion Model (SPSTFM) is developed and assessed with reflectance data from Gaofen-2 Multi-Spectral (GF-2 MS) and Gaofen-1 Wide-Field-View (GF-1 WFV). By introducing a spatially enhanced training method into the dictionary training and sparse coding processes, the developed fusion framework is expected to improve the descriptive capability of the high-resolution and low-resolution overcomplete dictionaries. Assessment indices including Average Absolute Deviation (AAD), Root-Mean-Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), Correlation Coefficient (CC), Spectral Angle Mapper (SAM), Structural Similarity (SSIM) and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) are then used to evaluate the employed fusion methods in a parallel comparison. The experimental results show that the proposed model predicts GF-2 MS reflectance more accurately than the original SPSTFM and achieves accuracy comparable to that of popular two-pair reflectance fusion models such as the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Enhanced STARFM (ESTARFM).
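
For readers who want a concrete handle on the assessment indices named in the abstract, the following minimal Python sketch computes three of them (RMSE, CC, and SAM) between a predicted and a reference multi-band reflectance image. The array shapes, band ordering, function names, and synthetic demo data are illustrative assumptions only; this is not the code used in the paper.

# Minimal sketch (not the paper's implementation) of three assessment indices:
# RMSE, CC, and SAM. Inputs are assumed to be float arrays of shape
# (bands, rows, cols) holding predicted and reference reflectance.
import numpy as np

def rmse(pred: np.ndarray, ref: np.ndarray) -> float:
    """Root-Mean-Square Error over all bands and pixels."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def cc(pred: np.ndarray, ref: np.ndarray) -> float:
    """Pearson Correlation Coefficient between the flattened images."""
    return float(np.corrcoef(pred.ravel(), ref.ravel())[0, 1])

def sam(pred: np.ndarray, ref: np.ndarray) -> float:
    """Mean Spectral Angle Mapper in degrees, one angle per pixel spectrum."""
    # Reshape to (pixels, bands) so each row is one pixel's spectrum.
    p = pred.reshape(pred.shape[0], -1).T
    r = ref.reshape(ref.shape[0], -1).T
    dot = np.sum(p * r, axis=1)
    norms = np.linalg.norm(p, axis=1) * np.linalg.norm(r, axis=1)
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

if __name__ == "__main__":
    # Synthetic 4-band, 64x64 reflectance images, purely for demonstration.
    rng = np.random.default_rng(0)
    reference = rng.uniform(0.0, 0.4, size=(4, 64, 64))
    predicted = reference + rng.normal(0.0, 0.01, size=reference.shape)
    print(f"RMSE: {rmse(predicted, reference):.4f}")
    print(f"CC:   {cc(predicted, reference):.4f}")
    print(f"SAM:  {sam(predicted, reference):.3f} deg")

Lower RMSE, SAM, and ERGAS values and higher CC, PSNR, and SSIM values all indicate a closer match between the fused prediction and the reference image, which is how such indices are typically read in a parallel comparison of fusion methods.
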
format Online
Article
Text
id pubmed-7146212
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7146212 2020-04-15 A Learning-Enhanced Two-Pair Spatiotemporal Reflectance Fusion Model for GF-2 and GF-1 WFV Satellite Data Ge, Yanqin Li, Yanrong Chen, Jinyong Sun, Kang Li, Dacheng Han, Qijin Sensors (Basel) Article Because current satellite sensor observation conditions make it difficult to meet application requirements for time series of remotely sensed images with high spatial resolution, reconstructing high-resolution images at specified dates is essential. As an effective data reconstruction technique, spatiotemporal fusion can be used to generate time series of land surface parameters with clear geophysical significance. In this study, an improved fusion model based on the Sparse Representation-Based Spatiotemporal Reflectance Fusion Model (SPSTFM) is developed and assessed with reflectance data from Gaofen-2 Multi-Spectral (GF-2 MS) and Gaofen-1 Wide-Field-View (GF-1 WFV). By introducing a spatially enhanced training method into the dictionary training and sparse coding processes, the developed fusion framework is expected to improve the descriptive capability of the high-resolution and low-resolution overcomplete dictionaries. Assessment indices including Average Absolute Deviation (AAD), Root-Mean-Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), Correlation Coefficient (CC), Spectral Angle Mapper (SAM), Structural Similarity (SSIM) and Erreur Relative Globale Adimensionnelle de Synthèse (ERGAS) are then used to evaluate the employed fusion methods in a parallel comparison. The experimental results show that the proposed model predicts GF-2 MS reflectance more accurately than the original SPSTFM and achieves accuracy comparable to that of popular two-pair reflectance fusion models such as the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Enhanced STARFM (ESTARFM). MDPI 2020-03-24 /pmc/articles/PMC7146212/ /pubmed/32213863 http://dx.doi.org/10.3390/s20061789 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Ge, Yanqin
Li, Yanrong
Chen, Jinyong
Sun, Kang
Li, Dacheng
Han, Qijin
A Learning-Enhanced Two-Pair Spatiotemporal Reflectance Fusion Model for GF-2 and GF-1 WFV Satellite Data
title A Learning-Enhanced Two-Pair Spatiotemporal Reflectance Fusion Model for GF-2 and GF-1 WFV Satellite Data
title_full A Learning-Enhanced Two-Pair Spatiotemporal Reflectance Fusion Model for GF-2 and GF-1 WFV Satellite Data
title_fullStr A Learning-Enhanced Two-Pair Spatiotemporal Reflectance Fusion Model for GF-2 and GF-1 WFV Satellite Data
title_full_unstemmed A Learning-Enhanced Two-Pair Spatiotemporal Reflectance Fusion Model for GF-2 and GF-1 WFV Satellite Data
title_short A Learning-Enhanced Two-Pair Spatiotemporal Reflectance Fusion Model for GF-2 and GF-1 WFV Satellite Data
title_sort learning-enhanced two-pair spatiotemporal reflectance fusion model for gf-2 and gf-1 wfv satellite data
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7146212/
https://www.ncbi.nlm.nih.gov/pubmed/32213863
http://dx.doi.org/10.3390/s20061789
work_keys_str_mv AT geyanqin alearningenhancedtwopairspatiotemporalreflectancefusionmodelforgf2andgf1wfvsatellitedata
AT liyanrong alearningenhancedtwopairspatiotemporalreflectancefusionmodelforgf2andgf1wfvsatellitedata
AT chenjinyong alearningenhancedtwopairspatiotemporalreflectancefusionmodelforgf2andgf1wfvsatellitedata
AT sunkang alearningenhancedtwopairspatiotemporalreflectancefusionmodelforgf2andgf1wfvsatellitedata
AT lidacheng alearningenhancedtwopairspatiotemporalreflectancefusionmodelforgf2andgf1wfvsatellitedata
AT hanqijin alearningenhancedtwopairspatiotemporalreflectancefusionmodelforgf2andgf1wfvsatellitedata
AT geyanqin learningenhancedtwopairspatiotemporalreflectancefusionmodelforgf2andgf1wfvsatellitedata
AT liyanrong learningenhancedtwopairspatiotemporalreflectancefusionmodelforgf2andgf1wfvsatellitedata
AT chenjinyong learningenhancedtwopairspatiotemporalreflectancefusionmodelforgf2andgf1wfvsatellitedata
AT sunkang learningenhancedtwopairspatiotemporalreflectancefusionmodelforgf2andgf1wfvsatellitedata
AT lidacheng learningenhancedtwopairspatiotemporalreflectancefusionmodelforgf2andgf1wfvsatellitedata
AT hanqijin learningenhancedtwopairspatiotemporalreflectancefusionmodelforgf2andgf1wfvsatellitedata