Artificial Intelligence Models Do Not Ground Negation, Humans Do. GuessWhat?! Dialogues as a Case Study
Negation is widely present in human communication, yet it is largely neglected in the research on conversational agents based on neural network architectures. Cognitive studies show that a supportive visual context makes the processing of negation easier. We take GuessWhat?!, a referential visually...
Main Authors: | Testoni, Alberto; Greco, Claudio; Bernardi, Raffaella
---|---
Format: | Online Article Text
Language: | English
Published: | Frontiers Media S.A., 2022
Subjects: | Big Data
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8819179/ https://www.ncbi.nlm.nih.gov/pubmed/35141519 http://dx.doi.org/10.3389/fdata.2021.736709
Field | Value
---|---
_version_ | 1784646000585998336
author | Testoni, Alberto; Greco, Claudio; Bernardi, Raffaella
author_facet | Testoni, Alberto; Greco, Claudio; Bernardi, Raffaella
author_sort | Testoni, Alberto
collection | PubMed |
description | Negation is widely present in human communication, yet it is largely neglected in research on conversational agents based on neural network architectures. Cognitive studies show that a supportive visual context makes the processing of negation easier. We take GuessWhat?!, a referential visually grounded guessing game, as a test-bed and evaluate to what extent guessers based on pre-trained language models profit from negatively answered polar questions. Moreover, to get a better grasp of the models' results, we select a controlled sample of games and run a crowdsourcing experiment with human subjects. We evaluate models and humans in the same settings and use the comparison to better interpret the models' results. We show that while humans profit from negatively answered questions to solve the task, models struggle to ground negation, and some of them barely use it; however, when the language signal is poorly informative, visual features help encode the negative information. Finally, the experiments with human subjects allow us to compare human and model predictions and to identify which models make errors that are more human-like and, as such, more plausible.
format | Online Article Text |
id | pubmed-8819179 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8819179 2022-02-08 Artificial Intelligence Models Do Not Ground Negation, Humans Do. GuessWhat?! Dialogues as a Case Study Testoni, Alberto; Greco, Claudio; Bernardi, Raffaella Front Big Data Big Data Negation is widely present in human communication, yet it is largely neglected in research on conversational agents based on neural network architectures. Cognitive studies show that a supportive visual context makes the processing of negation easier. We take GuessWhat?!, a referential visually grounded guessing game, as a test-bed and evaluate to what extent guessers based on pre-trained language models profit from negatively answered polar questions. Moreover, to get a better grasp of the models' results, we select a controlled sample of games and run a crowdsourcing experiment with human subjects. We evaluate models and humans in the same settings and use the comparison to better interpret the models' results. We show that while humans profit from negatively answered questions to solve the task, models struggle to ground negation, and some of them barely use it; however, when the language signal is poorly informative, visual features help encode the negative information. Finally, the experiments with human subjects allow us to compare human and model predictions and to identify which models make errors that are more human-like and, as such, more plausible. Frontiers Media S.A. 2022-01-24 /pmc/articles/PMC8819179/ /pubmed/35141519 http://dx.doi.org/10.3389/fdata.2021.736709 Text en Copyright © 2022 Testoni, Greco and Bernardi. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle | Big Data; Testoni, Alberto; Greco, Claudio; Bernardi, Raffaella; Artificial Intelligence Models Do Not Ground Negation, Humans Do. GuessWhat?! Dialogues as a Case Study
title | Artificial Intelligence Models Do Not Ground Negation, Humans Do. GuessWhat?! Dialogues as a Case Study |
title_full | Artificial Intelligence Models Do Not Ground Negation, Humans Do. GuessWhat?! Dialogues as a Case Study |
title_fullStr | Artificial Intelligence Models Do Not Ground Negation, Humans Do. GuessWhat?! Dialogues as a Case Study |
title_full_unstemmed | Artificial Intelligence Models Do Not Ground Negation, Humans Do. GuessWhat?! Dialogues as a Case Study |
title_short | Artificial Intelligence Models Do Not Ground Negation, Humans Do. GuessWhat?! Dialogues as a Case Study |
title_sort | artificial intelligence models do not ground negation, humans do. guesswhat?! dialogues as a case study |
topic | Big Data |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8819179/ https://www.ncbi.nlm.nih.gov/pubmed/35141519 http://dx.doi.org/10.3389/fdata.2021.736709 |
work_keys_str_mv | AT testonialberto artificialintelligencemodelsdonotgroundnegationhumansdoguesswhatdialoguesasacasestudy AT grecoclaudio artificialintelligencemodelsdonotgroundnegationhumansdoguesswhatdialoguesasacasestudy AT bernardiraffaella artificialintelligencemodelsdonotgroundnegationhumansdoguesswhatdialoguesasacasestudy |