
CLIP-Driven Prototype Network for Few-Shot Semantic Segmentation

Recent research has shown that visual–text pretrained models perform well in traditional vision tasks. CLIP, as the most influential work, has garnered significant attention from researchers. Thanks to its excellent visual representation capabilities, many recent studies have used CLIP for pixel-lev...


Bibliographic Details
Main Authors: Guo, Shi-Cheng, Liu, Shang-Kun, Wang, Jing-Yu, Zheng, Wei-Min, Jiang, Cheng-Yu
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10529322/
https://www.ncbi.nlm.nih.gov/pubmed/37761652
http://dx.doi.org/10.3390/e25091353