Blind Quality Prediction for View Synthesis Based on Heterogeneous Distortion Perception
The quality of synthesized images directly affects the practical application of virtual view synthesis technology, which typically uses a depth-image-based rendering (DIBR) algorithm to generate a new viewpoint from texture and depth images. Current view synthesis quality metrics commonly evaluate the quality of DIBR-synthesized images, but the DIBR process is computationally expensive and time-consuming. In addition, existing view synthesis quality metrics lack robustness because they rely on shallow hand-crafted features. To avoid the complicated DIBR process and learn more effective features, this paper presents a blind quality prediction model for view synthesis based on HEterogeneous DIstortion Perception, dubbed HEDIP, which predicts the image quality of view synthesis from texture and depth images. Specifically, the texture and depth images are first fused based on the discrete cosine transform to simulate the distortion of view synthesis images, and then spatial- and gradient-domain features are extracted by a Two-Channel Convolutional Neural Network (TCCNN). Finally, a fully connected layer maps the extracted features to a quality score. Notably, because of local distortions in view synthesis images, the ground-truth score of the source image cannot effectively serve as the label of each image patch during training. We therefore design a Heterogeneous Distortion Perception (HDP) module to provide effective training labels for each image patch. Experiments show that, with the help of the HDP module, the proposed model effectively predicts the quality of view synthesis.
Main Authors: | Shi, Haozhi; Wang, Lanmei; Wang, Guibao |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9504726/ https://www.ncbi.nlm.nih.gov/pubmed/36146438 http://dx.doi.org/10.3390/s22187081 |
_version_ | 1784796289398996992 |
---|---|
author | Shi, Haozhi Wang, Lanmei Wang, Guibao |
author_facet | Shi, Haozhi Wang, Lanmei Wang, Guibao |
author_sort | Shi, Haozhi |
collection | PubMed |
description | The quality of synthesized images directly affects the practical application of virtual view synthesis technology, which typically uses a depth-image-based rendering (DIBR) algorithm to generate a new viewpoint from texture and depth images. Current view synthesis quality metrics commonly evaluate the quality of DIBR-synthesized images, but the DIBR process is computationally expensive and time-consuming. In addition, existing view synthesis quality metrics lack robustness because they rely on shallow hand-crafted features. To avoid the complicated DIBR process and learn more effective features, this paper presents a blind quality prediction model for view synthesis based on HEterogeneous DIstortion Perception, dubbed HEDIP, which predicts the image quality of view synthesis from texture and depth images. Specifically, the texture and depth images are first fused based on the discrete cosine transform to simulate the distortion of view synthesis images, and then spatial- and gradient-domain features are extracted by a Two-Channel Convolutional Neural Network (TCCNN). Finally, a fully connected layer maps the extracted features to a quality score. Notably, because of local distortions in view synthesis images, the ground-truth score of the source image cannot effectively serve as the label of each image patch during training. We therefore design a Heterogeneous Distortion Perception (HDP) module to provide effective training labels for each image patch. Experiments show that, with the help of the HDP module, the proposed model effectively predicts the quality of view synthesis. |
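The abstract's first step fuses texture and depth images in the DCT domain to simulate view-synthesis distortion. The record does not give the paper's actual fusion rule, so the sketch below is only a minimal, hypothetical illustration: a 2-D DCT built from an orthonormal DCT-II basis matrix, with a made-up fusion rule (`fuse_dct`, with an arbitrary `cutoff`) that keeps the texture block's low-frequency coefficients and substitutes the depth block's high-frequency ones.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix; rows are cosine basis vectors."""
    k = np.arange(n)
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0, :] = np.sqrt(1.0 / n)  # the DC row uses the smaller normalization
    return M

def dct2(block: np.ndarray) -> np.ndarray:
    """2-D DCT of a square block: transform rows, then columns."""
    M = dct_matrix(block.shape[0])
    return M @ block @ M.T

def idct2(coeffs: np.ndarray) -> np.ndarray:
    """Inverse 2-D DCT; the orthonormal basis is inverted by its transpose."""
    M = dct_matrix(coeffs.shape[0])
    return M.T @ coeffs @ M

def fuse_dct(texture: np.ndarray, depth: np.ndarray, cutoff: int = 4) -> np.ndarray:
    """Hypothetical fusion rule: texture low frequencies + depth high frequencies."""
    Ct, Cd = dct2(texture), dct2(depth)
    n = Ct.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    high = (i + j) >= cutoff  # diagonal (zig-zag style) frequency split
    return idct2(np.where(high, Cd, Ct))
```

Setting `cutoff` past the largest diagonal index reproduces the texture block exactly, which doubles as a round-trip check of the transform pair.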
format | Online Article Text |
id | pubmed-9504726 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9504726 2022-09-24 Blind Quality Prediction for View Synthesis Based on Heterogeneous Distortion Perception Shi, Haozhi Wang, Lanmei Wang, Guibao Sensors (Basel) Article The quality of synthesized images directly affects the practical application of virtual view synthesis technology, which typically uses a depth-image-based rendering (DIBR) algorithm to generate a new viewpoint from texture and depth images. Current view synthesis quality metrics commonly evaluate the quality of DIBR-synthesized images, but the DIBR process is computationally expensive and time-consuming. In addition, existing view synthesis quality metrics lack robustness because they rely on shallow hand-crafted features. To avoid the complicated DIBR process and learn more effective features, this paper presents a blind quality prediction model for view synthesis based on HEterogeneous DIstortion Perception, dubbed HEDIP, which predicts the image quality of view synthesis from texture and depth images. Specifically, the texture and depth images are first fused based on the discrete cosine transform to simulate the distortion of view synthesis images, and then spatial- and gradient-domain features are extracted by a Two-Channel Convolutional Neural Network (TCCNN). Finally, a fully connected layer maps the extracted features to a quality score. Notably, because of local distortions in view synthesis images, the ground-truth score of the source image cannot effectively serve as the label of each image patch during training. We therefore design a Heterogeneous Distortion Perception (HDP) module to provide effective training labels for each image patch. Experiments show that, with the help of the HDP module, the proposed model effectively predicts the quality of view synthesis.
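The TCCNN described above takes a spatial-domain channel and a gradient-domain channel as input. The network architecture is not specified in this record; as a hypothetical sketch of how the gradient-domain input channel can be produced, the snippet below computes a Sobel gradient-magnitude map in plain NumPy (the Sobel kernels and 'valid' border handling are my assumptions, not details from the paper).

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2_valid(img: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2-D correlation; adequate for a 3x3 kernel."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def gradient_magnitude(img: np.ndarray) -> np.ndarray:
    """Gradient-domain map: magnitude of the Sobel gradient at each pixel."""
    gx = conv2_valid(img, SOBEL_X)
    gy = conv2_valid(img, SOBEL_Y)
    return np.hypot(gx, gy)
```

A flat region yields a zero gradient map, while intensity edges (where DIBR-style geometric distortions tend to concentrate) produce large responses, which is why a gradient channel can complement raw spatial input.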
MDPI 2022-09-19 /pmc/articles/PMC9504726/ /pubmed/36146438 http://dx.doi.org/10.3390/s22187081 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Shi, Haozhi Wang, Lanmei Wang, Guibao Blind Quality Prediction for View Synthesis Based on Heterogeneous Distortion Perception |
title | Blind Quality Prediction for View Synthesis Based on Heterogeneous Distortion Perception |
title_full | Blind Quality Prediction for View Synthesis Based on Heterogeneous Distortion Perception |
title_fullStr | Blind Quality Prediction for View Synthesis Based on Heterogeneous Distortion Perception |
title_full_unstemmed | Blind Quality Prediction for View Synthesis Based on Heterogeneous Distortion Perception |
title_short | Blind Quality Prediction for View Synthesis Based on Heterogeneous Distortion Perception |
title_sort | blind quality prediction for view synthesis based on heterogeneous distortion perception |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9504726/ https://www.ncbi.nlm.nih.gov/pubmed/36146438 http://dx.doi.org/10.3390/s22187081 |
work_keys_str_mv | AT shihaozhi blindqualitypredictionforviewsynthesisbasedonheterogeneousdistortionperception AT wanglanmei blindqualitypredictionforviewsynthesisbasedonheterogeneousdistortionperception AT wangguibao blindqualitypredictionforviewsynthesisbasedonheterogeneousdistortionperception |