
Neighboring Algorithm for Visual Semantic Analysis toward GAN-Generated Pictures


Bibliographic Details
Main Authors: Zhang, Lu-Ming, Sheng, Yichuan
Format: Online Article Text
Language: English
Published: Hindawi 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9525754/
https://www.ncbi.nlm.nih.gov/pubmed/36193335
http://dx.doi.org/10.1155/2022/2188152
Description
Summary: Generative adversarial network (GAN)-guided visual quality evaluation means scoring GAN-generated images to quantify the degree of their visual distortion. In general, very few quality-evaluation algorithms exist for GAN-generated images, and the running efficiency of existing algorithms is unsatisfactory. In this article, we propose a novel image ranking algorithm based on the nearest neighbor algorithm. It obtains an automatic and objective evaluation of GAN-generated images using an efficient evaluation technique. First, with the support of an artificial neural network, features of the various images are extracted to form a homogeneous candidate pool of similar images, to which the comparison of generated images is restricted. Subsequently, using the K-nearest neighbors algorithm, we retrieve from this candidate pool the K images most similar to the generated image and calculate the image quality score accordingly. Finally, the quality of images produced by GAN models trained on a variety of classical datasets is evaluated. Comprehensive experimental results show that our algorithm substantially improves the efficiency and accuracy of objective evaluation of GAN-generated images: its computational cost is only 1/9–1/28 that of the other methods, while the consistency between the objective evaluation and human visual perception has improved by more than 80%.
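The retrieval-and-scoring step described in the summary can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: image features are taken as plain vectors (in the paper they are extracted by a neural network), similarity is chosen here as cosine similarity, and the quality score is simply the mean similarity to the K nearest pooled images. The function name `knn_quality_score` is hypothetical.

```python
import numpy as np

def knn_quality_score(gen_feat, pool_feats, k=5):
    """Score one generated-image feature vector by its mean cosine
    similarity to the K nearest neighbors in a candidate pool of
    real-image feature vectors (assumption: higher = less distorted)."""
    # Normalize so that a dot product equals cosine similarity.
    g = gen_feat / np.linalg.norm(gen_feat)
    pool = pool_feats / np.linalg.norm(pool_feats, axis=1, keepdims=True)
    sims = pool @ g                 # similarity to every pooled image
    top_k = np.sort(sims)[-k:]     # keep the K most similar images
    return float(top_k.mean())     # average over the K neighbors

# Toy example: a pool of 100 synthetic 64-dim "real" features and one
# generated feature lying close to a pool member.
rng = np.random.default_rng(0)
pool = rng.normal(size=(100, 64))
gen = pool[0] + 0.05 * rng.normal(size=64)   # near-duplicate of a real image
print(knn_quality_score(gen, pool, k=5))
```

Restricting the comparison to a pre-built candidate pool, as the summary describes, is what keeps the evaluation cheap: each new generated image needs only one pass of similarity computations against the pool rather than a full pairwise comparison with every dataset image.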