
Language Models Explain Word Reading Times Better Than Empirical Predictability

Bibliographic Details
Main Authors: Hofmann, Markus J., Remus, Steffen, Biemann, Chris, Radach, Ralph, Kuchinke, Lars
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8847793/
https://www.ncbi.nlm.nih.gov/pubmed/35187472
http://dx.doi.org/10.3389/frai.2021.730570
_version_ 1784652123292565504
author Hofmann, Markus J.
Remus, Steffen
Biemann, Chris
Radach, Ralph
Kuchinke, Lars
author_facet Hofmann, Markus J.
Remus, Steffen
Biemann, Chris
Radach, Ralph
Kuchinke, Lars
author_sort Hofmann, Markus J.
collection PubMed
description Though there is a strong consensus that word length and frequency are the most important single-word features determining visual-orthographic access to the mental lexicon, there is less agreement as to how best to capture syntactic and semantic factors. The traditional approach in cognitive reading research assumes that word predictability from sentence context is best captured by cloze completion probability (CCP) derived from human performance data. We review recent research suggesting that probabilistic language models provide deeper explanations for syntactic and semantic effects than CCP. Then we compare CCP with three probabilistic language models for predicting word viewing times in an English and a German eye-tracking sample: (1) Symbolic n-gram models consolidate syntactic and semantic short-range relations by computing the probability that a word occurs, given the two preceding words. (2) Topic models rely on subsymbolic representations to capture long-range semantic similarity by word co-occurrence counts in documents. (3) In recurrent neural networks (RNNs), the subsymbolic units are trained to predict the next word, given all preceding words in the sentences. To examine lexical retrieval, these models were used to predict single fixation durations and gaze durations to capture rapidly successful and standard lexical access, and total viewing time to capture late semantic integration. The linear item-level analyses showed greater correlations with all eye-movement measures for all language models than for CCP. Then we examined non-linear relations between the different types of predictability and the reading times using generalized additive models. N-gram and RNN probabilities of the present word more consistently predicted reading performance than topic models or CCP. For the effects of last-word probability on current-word viewing times, we obtained the best results with n-gram models. Such count-based models seem to best capture short-range access that is still underway when the eyes move on to the subsequent word. The prediction-trained RNN models, in contrast, better predicted early preprocessing of the next word. In sum, our results demonstrate that the different language models account for differential cognitive processes during reading. We discuss these algorithmically concrete blueprints of lexical consolidation as theoretically deep explanations for human reading.
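
As an illustrative aside (not part of the article record), the symbolic n-gram approach described in the abstract, estimating the probability of a word given the two preceding words, can be sketched roughly as follows. The toy corpus, the add-alpha smoothing, the vocabulary-size constant, and the function names are assumptions chosen for this example, not the authors' implementation or training data.

```python
from collections import Counter

def train_trigram_counts(sentences):
    """Count bigram contexts and trigram continuations over tokenized sentences."""
    bigrams, trigrams = Counter(), Counter()
    for tokens in sentences:
        padded = ["<s>", "<s>"] + tokens  # sentence-start padding
        for i in range(2, len(padded)):
            bigrams[(padded[i - 2], padded[i - 1])] += 1
            trigrams[(padded[i - 2], padded[i - 1], padded[i])] += 1
    return bigrams, trigrams

def trigram_probability(w1, w2, w3, bigrams, trigrams, alpha=1.0, vocab_size=10_000):
    """P(w3 | w1, w2) with add-alpha smoothing; vocab_size is an assumed constant."""
    return (trigrams[(w1, w2, w3)] + alpha) / (bigrams[(w1, w2)] + alpha * vocab_size)

# Toy usage: the probability of a word given the two words that precede it.
corpus = [["the", "eyes", "move", "on"], ["the", "eyes", "fixate", "words"]]
bi, tri = train_trigram_counts(corpus)
print(trigram_probability("the", "eyes", "move", bi, tri))
```

In the study itself, such word-level probabilities are then related to single fixation durations, gaze durations, and total viewing times; the add-alpha smoothing here is only a simple stand-in for the smoothing a production n-gram toolkit would apply.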
format Online
Article
Text
id pubmed-8847793
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-8847793 2022-02-17 Language Models Explain Word Reading Times Better Than Empirical Predictability Hofmann, Markus J. Remus, Steffen Biemann, Chris Radach, Ralph Kuchinke, Lars Front Artif Intell Artificial Intelligence Though there is a strong consensus that word length and frequency are the most important single-word features determining visual-orthographic access to the mental lexicon, there is less agreement as to how best to capture syntactic and semantic factors. The traditional approach in cognitive reading research assumes that word predictability from sentence context is best captured by cloze completion probability (CCP) derived from human performance data. We review recent research suggesting that probabilistic language models provide deeper explanations for syntactic and semantic effects than CCP. Then we compare CCP with three probabilistic language models for predicting word viewing times in an English and a German eye-tracking sample: (1) Symbolic n-gram models consolidate syntactic and semantic short-range relations by computing the probability that a word occurs, given the two preceding words. (2) Topic models rely on subsymbolic representations to capture long-range semantic similarity by word co-occurrence counts in documents. (3) In recurrent neural networks (RNNs), the subsymbolic units are trained to predict the next word, given all preceding words in the sentences. To examine lexical retrieval, these models were used to predict single fixation durations and gaze durations to capture rapidly successful and standard lexical access, and total viewing time to capture late semantic integration. The linear item-level analyses showed greater correlations with all eye-movement measures for all language models than for CCP. Then we examined non-linear relations between the different types of predictability and the reading times using generalized additive models. N-gram and RNN probabilities of the present word more consistently predicted reading performance than topic models or CCP. For the effects of last-word probability on current-word viewing times, we obtained the best results with n-gram models. Such count-based models seem to best capture short-range access that is still underway when the eyes move on to the subsequent word. The prediction-trained RNN models, in contrast, better predicted early preprocessing of the next word. In sum, our results demonstrate that the different language models account for differential cognitive processes during reading. We discuss these algorithmically concrete blueprints of lexical consolidation as theoretically deep explanations for human reading. Frontiers Media S.A. 2022-02-02 /pmc/articles/PMC8847793/ /pubmed/35187472 http://dx.doi.org/10.3389/frai.2021.730570 Text en Copyright © 2022 Hofmann, Remus, Biemann, Radach and Kuchinke. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Artificial Intelligence
Hofmann, Markus J.
Remus, Steffen
Biemann, Chris
Radach, Ralph
Kuchinke, Lars
Language Models Explain Word Reading Times Better Than Empirical Predictability
title Language Models Explain Word Reading Times Better Than Empirical Predictability
title_full Language Models Explain Word Reading Times Better Than Empirical Predictability
title_fullStr Language Models Explain Word Reading Times Better Than Empirical Predictability
title_full_unstemmed Language Models Explain Word Reading Times Better Than Empirical Predictability
title_short Language Models Explain Word Reading Times Better Than Empirical Predictability
title_sort language models explain word reading times better than empirical predictability
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8847793/
https://www.ncbi.nlm.nih.gov/pubmed/35187472
http://dx.doi.org/10.3389/frai.2021.730570
work_keys_str_mv AT hofmannmarkusj languagemodelsexplainwordreadingtimesbetterthanempiricalpredictability
AT remussteffen languagemodelsexplainwordreadingtimesbetterthanempiricalpredictability
AT biemannchris languagemodelsexplainwordreadingtimesbetterthanempiricalpredictability
AT radachralph languagemodelsexplainwordreadingtimesbetterthanempiricalpredictability
AT kuchinkelars languagemodelsexplainwordreadingtimesbetterthanempiricalpredictability