Deep language algorithms predict semantic comprehension from brain activity
Deep language algorithms, like GPT-2, have demonstrated remarkable abilities to process text and now constitute the backbone of automatic translation, summarization, and dialogue. However, whether these models encode information that relates to human comprehension remains controversial. Here,...
Main authors: | Caucheteux, Charlotte; Gramfort, Alexandre; King, Jean-Rémi |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2022 |
Subjects: | |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9522791/ https://www.ncbi.nlm.nih.gov/pubmed/36175483 http://dx.doi.org/10.1038/s41598-022-20460-9 |
_version_ | 1784800134160187392 |
---|---|
author | Caucheteux, Charlotte Gramfort, Alexandre King, Jean-Rémi |
author_facet | Caucheteux, Charlotte Gramfort, Alexandre King, Jean-Rémi |
author_sort | Caucheteux, Charlotte |
collection | PubMed |
description | Deep language algorithms, like GPT-2, have demonstrated remarkable abilities to process text and now constitute the backbone of automatic translation, summarization, and dialogue. However, whether these models encode information that relates to human comprehension remains controversial. Here, we show that the representations of GPT-2 not only map onto the brain responses to spoken stories, but also predict the extent to which subjects understand the corresponding narratives. To this end, we analyze 101 subjects recorded with functional Magnetic Resonance Imaging while listening to 70 min of short stories. We then fit a linear mapping model to predict brain activity from GPT-2’s activations. Finally, we show that this mapping reliably correlates ([Formula: see text]) with subjects’ comprehension scores as assessed for each story. This effect peaks in the angular, medial temporal and supra-marginal gyri, and is best accounted for by the long-distance dependencies generated in the deep layers of GPT-2. Overall, this study shows how deep language models help clarify the brain computations underlying language comprehension. |
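The analysis pipeline named in the description (a linear mapping model fit from GPT-2 activations to fMRI responses, scored by correlating predicted and observed activity) can be sketched as follows. This is a minimal illustration, not the paper's actual code: the random arrays stand in for real GPT-2 layer activations and BOLD signals, and the ridge penalty `alpha=1.0` is an assumed hyperparameter.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real data: one row per fMRI sample.
# X ~ GPT-2 activations, Y ~ BOLD responses for a set of voxels.
n_samples, n_features, n_voxels = 500, 64, 20
X = rng.standard_normal((n_samples, n_features))
W = rng.standard_normal((n_features, n_voxels))        # hidden linear map
Y = X @ W + rng.standard_normal((n_samples, n_voxels)) # noisy "brain" data

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0
)

# Linear mapping model: predict brain activity from GPT-2 activations.
model = Ridge(alpha=1.0).fit(X_train, Y_train)
Y_pred = model.predict(X_test)

def pearson_per_voxel(a, b):
    """Pearson correlation between columns of a and b (one score per voxel)."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / (
        np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0)
    )

# "Brain score": correlation between predicted and observed activity,
# computed independently for each voxel on held-out data.
scores = pearson_per_voxel(Y_test, Y_pred)
print(scores.shape, scores.mean())
```

In the study, such voxel-wise scores would then be related to per-story comprehension ratings; here the synthetic data only demonstrates the fit-and-correlate structure of the encoding analysis.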
format | Online Article Text |
id | pubmed-9522791 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-9522791 2022-10-01 Deep language algorithms predict semantic comprehension from brain activity Caucheteux, Charlotte; Gramfort, Alexandre; King, Jean-Rémi Sci Rep Article Nature Publishing Group UK 2022-09-29 /pmc/articles/PMC9522791/ /pubmed/36175483 http://dx.doi.org/10.1038/s41598-022-20460-9 Text en © The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Caucheteux, Charlotte Gramfort, Alexandre King, Jean-Rémi Deep language algorithms predict semantic comprehension from brain activity |
title | Deep language algorithms predict semantic comprehension from brain activity |
title_full | Deep language algorithms predict semantic comprehension from brain activity |
title_fullStr | Deep language algorithms predict semantic comprehension from brain activity |
title_full_unstemmed | Deep language algorithms predict semantic comprehension from brain activity |
title_short | Deep language algorithms predict semantic comprehension from brain activity |
title_sort | deep language algorithms predict semantic comprehension from brain activity |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9522791/ https://www.ncbi.nlm.nih.gov/pubmed/36175483 http://dx.doi.org/10.1038/s41598-022-20460-9 |
work_keys_str_mv | AT caucheteuxcharlotte deeplanguagealgorithmspredictsemanticcomprehensionfrombrainactivity AT gramfortalexandre deeplanguagealgorithmspredictsemanticcomprehensionfrombrainactivity AT kingjeanremi deeplanguagealgorithmspredictsemanticcomprehensionfrombrainactivity |