What Does a Language-And-Vision Transformer See: The Impact of Semantic Information on Visual Representations
Neural networks have proven to be very successful in automatically capturing the composition of language and different structures across a range of multi-modal tasks. Thus, an important question to investigate is how neural networks learn and organise such structures. Numerous studies have examined...
Main Authors: Ilinykh, Nikolai; Dobnik, Simon
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8679841/ | https://www.ncbi.nlm.nih.gov/pubmed/34927063 | http://dx.doi.org/10.3389/frai.2021.767971
Similar Items
- Interpreting vision and language generative models with semantic visual priors
  by: Cafagna, Michele, et al.
  Published: (2023)
- Semantic Representations for NLP Using VerbNet and the Generative Lexicon
  by: Brown, Susan Windisch, et al.
  Published: (2022)
- Exploring the Representations of Individual Entities in the Brain Combining EEG and Distributional Semantics
  by: Bruera, Andrea, et al.
  Published: (2022)
- Challenges and Prospects in Vision and Language Research
  by: Kafle, Kushal, et al.
  Published: (2019)
- Identification of offensive language in Urdu using semantic and embedding models
  by: Hussain, Sajid, et al.
  Published: (2022)