S2A: Scale-Attention-Aware Networks for Video Super-Resolution


Bibliographic Details
Main Authors: Guo, Taian; Dai, Tao; Liu, Ling; Zhu, Zexuan; Xia, Shu-Tao
Format: Online Article (Text)
Language: English
Published: MDPI, 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8619237/
https://www.ncbi.nlm.nih.gov/pubmed/34828096
http://dx.doi.org/10.3390/e23111398
Description
Summary: Convolutional Neural Networks (CNNs) have been widely used in video super-resolution (VSR). Most existing VSR methods focus on how to exploit the information of multiple frames while neglecting the correlations among intermediate features, thus limiting the models' representational capacity. To address this problem, we propose a novel Scale-and-Attention-Aware (SAA) network that applies different attention to streams of different temporal lengths, and further exploits both spatial and channel attention on the separate streams with a newly proposed Criss-Cross Channel Attention Module (C³AM). Experiments on public VSR datasets demonstrate the superiority of our method over other state-of-the-art methods in both quantitative and qualitative terms.
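
The record reproduces only the abstract, so the exact design of the SAA streams and the C³AM is not shown here. As a minimal, purely illustrative sketch of the channel-attention component the summary mentions, the PyTorch module below implements a standard squeeze-and-excitation channel gate (Hu et al., 2018) that could reweight the features of one temporal-length stream; the class name, the reduction factor, and the tensor shapes are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style channel gate: a hypothetical
    # stand-in for the channel-attention part of the paper's C³AM.
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) features of one stream.
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # global average pool -> (b, c)
        return x * w.view(b, c, 1, 1)     # reweight each channel

# Example: gate one stream's intermediate features.
feats = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])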