
Symbols and grounding in large language models

Large language models (LLMs) are one of the most impressive achievements of artificial intelligence in recent years. However, their relevance to the study of language more broadly remains unclear. This article considers the potential of LLMs to serve as models of language understanding in humans. While debate on this question typically centres around models’ performance on challenging language understanding tasks, this article argues that the answer depends on models’ underlying competence, and thus that the focus of the debate should be on empirical work which seeks to characterize the representations and processing algorithms that underlie model behaviour. From this perspective, the article offers counterarguments to two commonly cited reasons why LLMs cannot serve as plausible models of language in humans: their lack of symbolic structure and their lack of grounding. For each, a case is made that recent empirical trends undermine the common assumptions about LLMs, and thus that it is premature to draw conclusions about LLMs’ ability (or lack thereof) to offer insights on human language representation and understanding. This article is part of a discussion meeting issue ‘Cognitive artificial intelligence’.


Bibliographic Details

Main Author: Pavlick, Ellie
Format: Online Article Text
Language: English
Published: The Royal Society, 2023
Subjects: Articles
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10239679/
https://www.ncbi.nlm.nih.gov/pubmed/37271171
http://dx.doi.org/10.1098/rsta.2022.0041
Collection: PubMed
Record ID: pubmed-10239679
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Philos Trans A Math Phys Eng Sci (Articles section)
Publication Dates: 2023-07-24 (issue); 2023-06-05 (online)
License: © 2023 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, provided the original author and source are credited.