No-Reference Quality Assessment for 3D Synthesized Images Based on Visual-Entropy-Guided Multi-Layer Features Analysis

Bibliographic Details
Main Authors: Jin, Chongchong, Peng, Zongju, Zou, Wenhui, Chen, Fen, Jiang, Gangyi, Yu, Mei
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8233917/
https://www.ncbi.nlm.nih.gov/pubmed/34207229
http://dx.doi.org/10.3390/e23060770
_version_ 1783713961608216576
author Jin, Chongchong
Peng, Zongju
Zou, Wenhui
Chen, Fen
Jiang, Gangyi
Yu, Mei
author_sort Jin, Chongchong
collection PubMed
description Multiview video plus depth is one of the mainstream representations of 3D scenes in emerging free-viewpoint video, which generates virtual 3D synthesized images through a depth-image-based-rendering (DIBR) technique. However, inaccurate depth maps and imperfect DIBR techniques result in different geometric distortions that seriously deteriorate users’ visual perception. An effective 3D synthesized image quality assessment (IQA) metric can simulate human visual perception and determine the application feasibility of the synthesized content. In this paper, a no-reference IQA metric based on visual-entropy-guided multi-layer features analysis for 3D synthesized images is proposed. According to the energy entropy, the geometric distortions are divided into two visual attention layers, namely, the bottom-up layer and the top-down layer. The feature of salient distortion is measured by regional proportion plus a transition threshold on the bottom-up layer. In parallel, the key distribution regions of insignificant geometric distortion are extracted by a relative total variation model, and the features of these distortions are measured by the interaction of decentralized attention and concentrated attention on the top-down layer. By integrating the features of both the bottom-up and top-down layers, a more visually perceptive quality evaluation model is built. Experimental results show that the proposed method is superior to state-of-the-art methods in assessing the quality of 3D synthesized images.
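The description above outlines the key computational steps. As a rough, illustrative sketch of the first step only, the Python snippet below computes a block-wise energy entropy and thresholds it to split a synthesized view into a bottom-up and a top-down attention layer. The gradient-based entropy definition, the 32x32 block size, the median threshold, the mapping of low entropy to the bottom-up (salient-distortion) layer, and the function names (block_energy_entropy, split_attention_layers) are all assumptions made for demonstration; they do not reproduce the authors' exact formulation.

```python
import numpy as np


def block_energy_entropy(block, eps=1e-12):
    # Shannon entropy of the block's normalized gradient-energy distribution
    # (an assumed stand-in for the paper's "energy entropy").
    gy, gx = np.gradient(block.astype(np.float64))
    energy = gx ** 2 + gy ** 2
    p = energy / (energy.sum() + eps)   # normalize energy to a probability map
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())


def split_attention_layers(img, block=32, thresh=None):
    # Label each block as bottom-up (assumed: low entropy, i.e. energy concentrated
    # around a salient geometric distortion) or top-down (assumed: high entropy,
    # dispersed, insignificant distortion). The median threshold is a placeholder.
    h, w = img.shape
    hb, wb = h // block, w // block
    ent = np.zeros((hb, wb))
    for i in range(hb):
        for j in range(wb):
            patch = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            ent[i, j] = block_energy_entropy(patch)
    if thresh is None:
        thresh = np.median(ent)          # assumed split point, not from the paper
    bottom_up = ent <= thresh
    top_down = ~bottom_up
    return ent, bottom_up, top_down


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    view = rng.random((256, 256))        # stand-in for a grayscale DIBR-synthesized view
    ent, bu, td = split_attention_layers(view)
    print("bottom-up blocks:", int(bu.sum()), "top-down blocks:", int(td.sum()))
```

In the full method, per-layer features (regional proportion with a transition threshold on the bottom-up layer, and relative-total-variation-guided region features on the top-down layer) would then be pooled into a single quality score; this sketch stops at the layer split.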
format Online
Article
Text
id pubmed-8233917
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8233917 2021-06-27 No-Reference Quality Assessment for 3D Synthesized Images Based on Visual-Entropy-Guided Multi-Layer Features Analysis Jin, Chongchong; Peng, Zongju; Zou, Wenhui; Chen, Fen; Jiang, Gangyi; Yu, Mei. Entropy (Basel), Article. MDPI 2021-06-18 /pmc/articles/PMC8233917/ /pubmed/34207229 http://dx.doi.org/10.3390/e23060770 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title No-Reference Quality Assessment for 3D Synthesized Images Based on Visual-Entropy-Guided Multi-Layer Features Analysis
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8233917/
https://www.ncbi.nlm.nih.gov/pubmed/34207229
http://dx.doi.org/10.3390/e23060770