Investigating a neural language model’s replicability of psycholinguistic experiments: A case study of NPI licensing
Main authors: | Shin, Unsub; Yi, Eunkyung; Song, Sanghoun |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2023 |
Subjects: | Psychology |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9995786/ https://www.ncbi.nlm.nih.gov/pubmed/36910779 http://dx.doi.org/10.3389/fpsyg.2023.937656 |
author | Shin, Unsub Yi, Eunkyung Song, Sanghoun |
collection | PubMed |
description | The recent success of deep learning neural language models such as Bidirectional Encoder Representations from Transformers (BERT) has brought innovations to computational language research. The present study explores the possibility of using a language model in investigating human language processes, based on the case study of negative polarity items (NPIs). We first conducted an experiment with BERT to examine whether the model successfully captures the hierarchical structural relationship between an NPI and its licensor and whether it may lead to an error analogous to the grammatical illusion shown in the psycholinguistic experiment (Experiment 1). We also investigated whether the language model can capture the fine-grained semantic properties of NPI licensors and discriminate their subtle differences on the scale of licensing strengths (Experiment 2). The results of the two experiments suggest that overall, the neural language model is highly sensitive to both syntactic and semantic constraints in NPI processing. The model’s processing patterns and sensitivities are shown to be very close to humans, suggesting their role as a research tool or object in the study of language. |
format | Online Article Text |
id | pubmed-9995786 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9995786 2023-03-10. Front Psychol (Psychology). Published online 2023-02-23 by Frontiers Media S.A. /pmc/articles/PMC9995786/ /pubmed/36910779 http://dx.doi.org/10.3389/fpsyg.2023.937656 Copyright © 2023 Shin, Yi and Song. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
title | Investigating a neural language model’s replicability of psycholinguistic experiments: A case study of NPI licensing |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9995786/ https://www.ncbi.nlm.nih.gov/pubmed/36910779 http://dx.doi.org/10.3389/fpsyg.2023.937656 |