Critical Analysis of Deconfounded Pretraining to Improve Visio-Linguistic Models
An important problem with many current visio-linguistic models is that they often depend on spurious correlations. A typical example of a spurious correlation between two variables is one that is due to a third variable causing both (a “confounder”). Recent work has addressed this by adjusting for s...
Main authors: Cornille, Nathan; Laenen, Katrien; Moens, Marie-Francine
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8993511/ https://www.ncbi.nlm.nih.gov/pubmed/35402901 http://dx.doi.org/10.3389/frai.2022.736791
Similar Items
- Explaining pretrained language models' understanding of linguistic structures using construction grammar
  by: Weissweiler, Leonie, et al.
  Published: (2023)
- Sequence-to-sequence pretraining for a less-resourced Slovenian language
  by: Ulčar, Matej, et al.
  Published: (2023)
- Lung Cancer Segmentation With Transfer Learning: Usefulness of a Pretrained Model Constructed From an Artificial Dataset Generated Using a Generative Adversarial Network
  by: Nishio, Mizuho, et al.
  Published: (2021)
- A Framework for the Computational Linguistic Analysis of Dehumanization
  by: Mendelsohn, Julia, et al.
  Published: (2020)
- The Future of Computational Linguistics: On Beyond Alchemy
  by: Church, Kenneth, et al.
  Published: (2021)