PLG-ViT: Vision Transformer with Parallel Local and Global Self-Attention

Recently, transformer architectures have shown superior performance compared to their CNN counterparts in many computer vision tasks. The self-attention mechanism enables transformer networks to connect visual dependencies over short as well as long distances, thus generating a large, sometimes even...
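
The snippet below is a minimal, illustrative sketch (not the authors' released PLG-ViT code) of what parallel local and global self-attention can look like in PyTorch: one branch attends within small non-overlapping windows to capture short-range dependencies, while a second branch lets every token attend to a pooled, coarse summary of the whole feature map for long-range context. The window size, the pooling-based global tokens, the use of nn.MultiheadAttention, and the additive fusion of the two branches are all assumptions made for illustration.

```python
# Minimal sketch of parallel local (windowed) and global (pooled) self-attention.
# NOT the paper's implementation; window size, pooling, and sum fusion are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ParallelLocalGlobalAttention(nn.Module):
    def __init__(self, dim, num_heads=4, window=7, pooled=7):
        super().__init__()
        self.window = window   # local attention window size (assumed)
        self.pooled = pooled   # spatial size of the pooled global tokens (assumed)
        self.local_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):      # x: (B, H, W, C), with H and W divisible by window
        B, H, W, C = x.shape
        w = self.window

        # Local branch: self-attention inside non-overlapping w x w windows.
        windows = x.view(B, H // w, w, W // w, w, C)
        windows = windows.permute(0, 1, 3, 2, 4, 5).reshape(-1, w * w, C)
        local, _ = self.local_attn(windows, windows, windows)
        local = local.view(B, H // w, W // w, w, w, C)
        local = local.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)

        # Global branch: every token attends to a coarse, pooled summary of the map.
        tokens = x.reshape(B, H * W, C)
        pooled = F.adaptive_avg_pool2d(x.permute(0, 3, 1, 2), self.pooled)
        pooled = pooled.flatten(2).transpose(1, 2)        # (B, pooled*pooled, C)
        global_, _ = self.global_attn(tokens, pooled, pooled)
        global_ = global_.view(B, H, W, C)

        # Fuse the branches; a simple sum here, the paper's fusion may differ.
        return local + global_


# Usage: a 14x14 feature map with 64 channels.
out = ParallelLocalGlobalAttention(64)(torch.randn(2, 14, 14, 64))
print(out.shape)  # torch.Size([2, 14, 14, 64])
```

Both branches preserve the spatial resolution of the input, so their outputs can be combined token-wise and fed to the next stage of a hierarchical vision transformer.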

Bibliographic Details

Main Authors: Ebert, Nikolas; Stricker, Didier; Wasenmüller, Oliver
Format: Online Article Text
Language: English
Published: MDPI, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10098752/
https://www.ncbi.nlm.nih.gov/pubmed/37050507
http://dx.doi.org/10.3390/s23073447