Effective Three-Stage Demosaicking Method for RGBW CFA Images Using The Iterative Error-Compensation Based Approach
Main Authors:
Format: Online Article Text
Language: English
Published: MDPI, 2020
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7412501/
https://www.ncbi.nlm.nih.gov/pubmed/32674284
http://dx.doi.org/10.3390/s20143908
Summary: As the color filter array (CFA) 2.0, the RGBW CFA pattern, in which each CFA pixel contains only one R, G, B, or W color value, provides more luminance information than the Bayer CFA pattern. Demosaicking RGBW CFA images $I^{RGBW}$ is necessary in order to produce high-quality RGB full-color images as the target images for human perception. In this letter, we propose a three-stage demosaicking method for $I^{RGBW}$. In the first stage, a cross shape-based color difference approach is proposed to interpolate the missing W color pixels in the W color plane of $I^{RGBW}$. In the second stage, an iterative error compensation-based demosaicking process is proposed to improve the quality of the demosaiced RGB full-color image. In the third stage, taking the input image $I^{RGBW}$ as the ground truth RGBW CFA image, an $I^{RGBW}$-based refinement process is proposed to refine the quality of the demosaiced image obtained in the second stage. Based on testing RGBW images collected from the Kodak and IMAX datasets, comprehensive experimental results demonstrate that the proposed three-stage demosaicking method achieves substantial quality and perceptual improvement relative to the previous method by Hamilton and Compton and two state-of-the-art methods: Kwan et al.'s pansharpening-based method and Kwan and Chou's deep learning-based method.
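The second and third stages both revolve around an error-compensation loop, which the summary describes only at a high level. The sketch below is a minimal Python illustration of that general idea, not the authors' actual algorithm: it assumes a simple normalized-convolution interpolator as the base demosaicker, a 4-channel R, G, B, W mask stack, and hypothetical function names chosen here for readability.

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 bilinear-style smoothing kernel used for normalized convolution.
KERNEL = np.array([[1., 2., 1.],
                   [2., 4., 2.],
                   [1., 2., 1.]])

def interpolate_plane(plane, mask):
    """Densify one sparse color plane by normalized convolution.

    plane: (H, W) array, zero wherever this color was not sampled.
    mask:  (H, W) bool array, True where this color was sampled.
    The smoothing does not reproduce even the sampled values exactly;
    that residual is what the compensation loop below feeds back.
    """
    num = convolve(plane, KERNEL, mode='mirror')
    den = convolve(mask.astype(float), KERNEL, mode='mirror')
    return num / np.maximum(den, 1e-8)

def demosaic(cfa, masks):
    """Naive per-plane demosaicking: (H, W) CFA -> (H, W, 4) R, G, B, W."""
    return np.stack(
        [interpolate_plane(cfa * masks[..., c], masks[..., c]) for c in range(4)],
        axis=-1)

def iterative_error_compensation(cfa, masks, n_iters=3):
    """Refine the demosaiced estimate against the observed CFA samples.

    Each pass re-mosaics the current estimate, measures the error at the
    pixels the sensor actually recorded, interpolates that error with the
    same operator, and adds it back, pulling the estimate toward
    consistency with the input CFA image.
    """
    est = demosaic(cfa, masks)
    for _ in range(n_iters):
        remosaicked = (est * masks).sum(axis=-1)  # sample estimate on the CFA lattice
        residual = cfa - remosaicked              # error at the observed samples
        est = est + demosaic(residual, masks)     # compensate with interpolated error
    return est
```

The design point the loop illustrates is consistency with the measured data: wherever the sensor recorded a sample, re-mosaicking the compensated estimate reproduces that sample increasingly closely, while the interpolated residual spreads matching corrections to the pixels the CFA never observed.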