Image Compression Based on Hybrid Domain Attention and Postprocessing Enhancement

Bibliographic Details
Main Authors: Bao, Yuting; Tao, Yuwen; Qian, Pengjiang
Format: Online Article Text
Language: English
Published: Hindawi 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8947896/
https://www.ncbi.nlm.nih.gov/pubmed/35341171
http://dx.doi.org/10.1155/2022/4926124
Description
Summary: Deep learning-based image compression methods have made significant achievements recently; their two key components are the entropy model for the latent representations and the encoder-decoder network. Both inaccurate entropy estimation and information redundancy in the latent representations reduce compression efficiency. To address these issues, this study proposes an image compression method based on a hybrid domain attention mechanism and postprocessing enhancement. Hybrid domain attention modules are embedded as nonlinear transform components in both the main encoder-decoder network and the hyperprior network to construct more compact latent features and hyperpriors, and the latent features are then modeled with parametric Gaussian scale mixture models to obtain more precise entropy estimates. In addition, an inverse quantization module is added to compensate for the errors introduced by quantization. On the decoding side, a postprocessing enhancement module further improves compression performance. The experimental results show that the peak signal-to-noise ratio (PSNR) and multiscale structural similarity (MS-SSIM) of the proposed method are higher than those of traditional compression methods and advanced neural network-based methods.
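
To make the "hybrid domain attention" idea in the summary concrete, below is a minimal PyTorch sketch of an attention block that combines channel attention and spatial attention with a residual connection. The class names, layer layout, and hyperparameters (reduction ratio, kernel size, 192 latent channels) are illustrative assumptions and are not taken from the paper's implementation.

# Minimal sketch of a hybrid domain (channel + spatial) attention block.
# Names and the exact layer layout are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel attention: reweights each feature channel from its global statistics."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),            # global average pooling: B x C x 1 x 1
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                       # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)


class SpatialAttention(nn.Module):
    """Spatial attention: a mask over spatial positions from pooled channel statistics."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)           # B x 1 x H x W
        max_map, _ = x.max(dim=1, keepdim=True)         # B x 1 x H x W
        mask = self.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask


class HybridDomainAttention(nn.Module):
    """Channel attention followed by spatial attention, with a residual connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.spatial_att(self.channel_att(x))


if __name__ == "__main__":
    y = torch.randn(1, 192, 16, 16)             # e.g. latent features inside the encoder
    print(HybridDomainAttention(192)(y).shape)  # torch.Size([1, 192, 16, 16])

A block like this leaves the feature dimensions unchanged, so in the setting described in the summary it could sit between convolutional stages of the main encoder-decoder and of the hyperprior network without altering the rest of the transform.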
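
The summary also states that the latent features are modeled with parametric Gaussian-scale mixture models for entropy estimation. The sketch below shows one generic way a mixture of Gaussians can assign an estimated bit cost to quantized latents; the function name, the tensor shapes, and the assumption that the mixture weights, means, and scales are predicted per latent element (e.g., by the hyperprior) are illustrative and not the paper's exact formulation.

# Sketch of rate estimation with a Gaussian mixture entropy model for quantized latents.
# Each quantized value y_hat is assigned the probability mass of its unit-width
# quantization bin under a K-component mixture; the bit cost is -log2 of that mass.
import torch


def gmm_bits(y_hat, weights, means, scales, eps=1e-9):
    """Estimated total bits for quantized latents y_hat (B x C x H x W).

    weights, means, scales: B x K x C x H x W mixture parameters
    (weights sum to 1 over the K dimension, scales are positive).
    """
    y = y_hat.unsqueeze(1)                      # broadcast over the K mixture components
    normal = torch.distributions.Normal(means, scales)
    # Probability mass on [y_hat - 0.5, y_hat + 0.5] for each component.
    pmf = normal.cdf(y + 0.5) - normal.cdf(y - 0.5)
    likelihood = (weights * pmf).sum(dim=1).clamp_min(eps)
    return -torch.log2(likelihood).sum()        # total estimated bits for the batch


if __name__ == "__main__":
    B, K, C, H, W = 1, 3, 192, 16, 16
    y_hat = torch.round(torch.randn(B, C, H, W))             # quantized latents
    w = torch.softmax(torch.randn(B, K, C, H, W), dim=1)     # mixture weights
    mu = torch.randn(B, K, C, H, W)
    sigma = torch.rand(B, K, C, H, W) + 0.1                  # positive scales
    print(gmm_bits(y_hat, w, mu, sigma))

The more closely such a mixture matches the true distribution of the latents, the fewer bits the entropy coder needs, which is why the summary ties more compact latent features and more precise entropy estimation to better compression efficiency.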