A cognitive modeling approach to learning and using reference biases in language
During real-time language processing, people rely on linguistic and non-linguistic biases to anticipate upcoming linguistic input. One of these linguistic biases is known as the implicit causality bias, wherein language users anticipate that certain entities will be rementioned in the discourse based on the entity's particular role in an expressed causal event.
Main Authors: | Toth, Abigail G.; Hendriks, Petra; Taatgen, Niels A.; van Rij, Jacolien |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2022 |
Subjects: | Artificial Intelligence |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9709269/ https://www.ncbi.nlm.nih.gov/pubmed/36467560 http://dx.doi.org/10.3389/frai.2022.933504 |
_version_ | 1784841112465178624 |
---|---|
author | Toth, Abigail G.; Hendriks, Petra; Taatgen, Niels A.; van Rij, Jacolien
author_facet | Toth, Abigail G.; Hendriks, Petra; Taatgen, Niels A.; van Rij, Jacolien
author_sort | Toth, Abigail G. |
collection | PubMed |
description | During real-time language processing, people rely on linguistic and non-linguistic biases to anticipate upcoming linguistic input. One of these linguistic biases is known as the implicit causality bias, wherein language users anticipate that certain entities will be rementioned in the discourse based on the entity's particular role in an expressed causal event. For example, when language users encounter a sentence like “Elizabeth congratulated Tina…” during real-time language processing, they seemingly anticipate that the discourse will continue about Tina, the object referent, rather than Elizabeth, the subject referent. However, it is often unclear how these reference biases are acquired and how exactly they get used during real-time language processing. In order to investigate these questions, we developed a reference learning model within the PRIMs cognitive architecture that simulated the process of predicting upcoming discourse referents and their linguistic forms. Crucially, across the linguistic input the model was presented with, there were asymmetries with respect to how the discourse continued. By utilizing the learning mechanisms of the PRIMs architecture, the model was able to optimize its predictions, ultimately leading to biased model behavior. More specifically, following subject-biased implicit causality verbs the model was more likely to predict that the discourse would continue about the subject referent, whereas following object-biased implicit causality verbs the model was more likely to predict that the discourse would continue about the object referent. In a similar fashion, the model was more likely to predict that subject referent continuations would be in the form of a pronoun, whereas object referent continuations would be in the form of a proper name. These learned biases were also shown to generalize to novel contexts in which either the verb or the subject and object referents were new. The results of the present study demonstrate that seemingly complex linguistic behavior can be explained by cognitively plausible domain-general learning mechanisms. This study has implications for psycholinguistic accounts of predictive language processing and language learning, as well as for theories of implicit causality and reference processing. |
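The description sketches the core mechanism: a model that learns to predict the upcoming discourse referent and its referential form from asymmetries in its linguistic input, using the domain-general utility learning of the PRIMs architecture. As a rough illustration of that idea only, the toy simulation below learns comparable biases with a simple delta-rule utility update and softmax choice. The corpus proportions, parameter values, and two-step predict-referent-then-form scheme are illustrative assumptions, not the authors' PRIMs implementation.

```python
# A minimal sketch (not the authors' PRIMs model) of how domain-general
# utility learning can give rise to implicit causality (IC) reference biases.
# Strategy utilities are updated with a delta rule, U <- U + alpha * (R - U),
# similar in spirit to production-utility learning in PRIMs/ACT-R.
# All proportions and parameter values are illustrative assumptions.

import math
import random
from collections import defaultdict

ALPHA = 0.1   # learning rate (assumed)
TAU = 0.25    # softmax temperature (assumed)

# Assumed input asymmetries: P(continuation is about the object | verb type),
# and P(pronoun | rementioned referent): subjects tend to be rementioned
# with pronouns, objects with proper names.
P_OBJECT_CONT = {"subject-biased": 0.3, "object-biased": 0.7}
P_PRONOUN = {"subject": 0.8, "object": 0.2}

utilities = defaultdict(float)  # (context, prediction) -> learned utility


def softmax_choice(context, options):
    """Pick a prediction stochastically, favoring high-utility options."""
    weights = [math.exp(utilities[(context, o)] / TAU) for o in options]
    return random.choices(options, weights=weights)[0]


def train(n_trials=5000):
    for _ in range(n_trials):
        verb = random.choice(["subject-biased", "object-biased"])
        # Sample this trial's "corpus" continuation: which referent is
        # rementioned, and in which referential form.
        referent = "object" if random.random() < P_OBJECT_CONT[verb] else "subject"
        form = "pronoun" if random.random() < P_PRONOUN[referent] else "name"
        # Predict the referent given the verb, then the form given the
        # referent; reward predictions that match the observed continuation.
        for context, options, outcome in [
            (verb, ["subject", "object"], referent),
            (referent, ["pronoun", "name"], form),
        ]:
            prediction = softmax_choice(context, options)
            reward = 1.0 if prediction == outcome else 0.0
            u = utilities[(context, prediction)]
            utilities[(context, prediction)] = u + ALPHA * (reward - u)


train()
for key in sorted(utilities):
    print(key, round(utilities[key], 2))
```

In this sketch each utility converges toward the probability that its prediction is rewarded, so after training the softmax choice reproduces the qualitative pattern the abstract reports: object continuations are favored after object-biased verbs, subject continuations after subject-biased verbs, and pronouns are expected for rementioned subjects but proper names for rementioned objects.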
format | Online Article Text |
id | pubmed-9709269 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9709269 2022-12-01 A cognitive modeling approach to learning and using reference biases in language Toth, Abigail G.; Hendriks, Petra; Taatgen, Niels A.; van Rij, Jacolien Front Artif Intell Artificial Intelligence (abstract as in the description field above) Frontiers Media S.A. 2022-11-16 /pmc/articles/PMC9709269/ /pubmed/36467560 http://dx.doi.org/10.3389/frai.2022.933504 Text en Copyright © 2022 Toth, Hendriks, Taatgen and van Rij. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle | Artificial Intelligence; Toth, Abigail G.; Hendriks, Petra; Taatgen, Niels A.; van Rij, Jacolien; A cognitive modeling approach to learning and using reference biases in language
title | A cognitive modeling approach to learning and using reference biases in language |
title_full | A cognitive modeling approach to learning and using reference biases in language |
title_fullStr | A cognitive modeling approach to learning and using reference biases in language |
title_full_unstemmed | A cognitive modeling approach to learning and using reference biases in language |
title_short | A cognitive modeling approach to learning and using reference biases in language |
title_sort | cognitive modeling approach to learning and using reference biases in language |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9709269/ https://www.ncbi.nlm.nih.gov/pubmed/36467560 http://dx.doi.org/10.3389/frai.2022.933504 |
work_keys_str_mv | AT tothabigailg acognitivemodelingapproachtolearningandusingreferencebiasesinlanguage AT hendrikspetra acognitivemodelingapproachtolearningandusingreferencebiasesinlanguage AT taatgennielsa acognitivemodelingapproachtolearningandusingreferencebiasesinlanguage AT vanrijjacolien acognitivemodelingapproachtolearningandusingreferencebiasesinlanguage AT tothabigailg cognitivemodelingapproachtolearningandusingreferencebiasesinlanguage AT hendrikspetra cognitivemodelingapproachtolearningandusingreferencebiasesinlanguage AT taatgennielsa cognitivemodelingapproachtolearningandusingreferencebiasesinlanguage AT vanrijjacolien cognitivemodelingapproachtolearningandusingreferencebiasesinlanguage |