
One model for the learning of language

A major goal of linguistics and cognitive science is to understand what class of learning systems can acquire natural language. Until recently, the computational requirements of language have been used to argue that learning is impossible without a highly constrained hypothesis space. Here, we describe a learning system that is maximally unconstrained, operating over the space of all computations, and is able to acquire many of the key structures present in natural language from positive evidence alone. We demonstrate this by providing the same learning model with data from 74 distinct formal languages which have been argued to capture key features of language, have been studied in experimental work, or come from an interesting complexity class. The model is able to successfully induce the latent system generating the observed strings from small amounts of evidence in almost all cases, including for regular (e.g., a^n, (ab)^n, and {a,b}^+), context-free (e.g., a^n b^n, a^n b^(n+m), and x x^R), and context-sensitive (e.g., a^n b^n c^n, a^n b^m c^n d^m, and xx) languages, as well as for many languages studied in learning experiments. These results show that relatively small amounts of positive evidence can support learning of rich classes of generative computations over structures. The model provides an idealized learning setup upon which additional cognitive constraints and biases can be formalized.

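As a rough, hypothetical illustration of what "positive evidence alone" means here: the learner only ever sees strings sampled from a target language, never counterexamples. The Python sketch below is not the authors' code; the function names and the max_n parameter are invented for this example. It samples strings from three of the languages named in the abstract, one per complexity class.

import random

def sample_regular(max_n=5):
    # Regular (ab)^n: n repetitions of "ab".
    n = random.randint(1, max_n)
    return "ab" * n

def sample_context_free(max_n=5):
    # Context-free a^n b^n: n a's followed by n b's.
    n = random.randint(1, max_n)
    return "a" * n + "b" * n

def sample_context_sensitive(max_n=5):
    # Context-sensitive a^n b^n c^n: equal runs of a's, b's, and c's.
    n = random.randint(1, max_n)
    return "a" * n + "b" * n + "c" * n

if __name__ == "__main__":
    random.seed(0)
    # A small corpus of positive examples from each language.
    print([sample_regular() for _ in range(5)])
    print([sample_context_free() for _ in range(5)])
    print([sample_context_sensitive() for _ in range(5)])

A learner that recovers, say, a^n b^n c^n from such samples has induced a context-sensitive generator from positive evidence only, which is the kind of result the paper reports across its 74 languages.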

Bibliographic Details
Main Authors: Yang, Yuan, Piantadosi, Steven T.
Format: Online Article Text
Language: English
Published: National Academy of Sciences 2022
Subjects: Social Sciences
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8812683/
https://www.ncbi.nlm.nih.gov/pubmed/35074868
http://dx.doi.org/10.1073/pnas.2021865119
_version_ 1784644706491170816
author Yang, Yuan
Piantadosi, Steven T.
author_facet Yang, Yuan
Piantadosi, Steven T.
author_sort Yang, Yuan
collection PubMed
description A major goal of linguistics and cognitive science is to understand what class of learning systems can acquire natural language. Until recently, the computational requirements of language have been used to argue that learning is impossible without a highly constrained hypothesis space. Here, we describe a learning system that is maximally unconstrained, operating over the space of all computations, and is able to acquire many of the key structures present in natural language from positive evidence alone. We demonstrate this by providing the same learning model with data from 74 distinct formal languages which have been argued to capture key features of language, have been studied in experimental work, or come from an interesting complexity class. The model is able to successfully induce the latent system generating the observed strings from small amounts of evidence in almost all cases, including for regular (e.g., a^n, (ab)^n, and {a,b}^+), context-free (e.g., a^n b^n, a^n b^(n+m), and x x^R), and context-sensitive (e.g., a^n b^n c^n, a^n b^m c^n d^m, and xx) languages, as well as for many languages studied in learning experiments. These results show that relatively small amounts of positive evidence can support learning of rich classes of generative computations over structures. The model provides an idealized learning setup upon which additional cognitive constraints and biases can be formalized.
format Online
Article
Text
id pubmed-8812683
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher National Academy of Sciences
record_format MEDLINE/PubMed
spelling pubmed-8812683 2022-02-16 One model for the learning of language Yang, Yuan Piantadosi, Steven T. Proc Natl Acad Sci U S A Social Sciences A major goal of linguistics and cognitive science is to understand what class of learning systems can acquire natural language. Until recently, the computational requirements of language have been used to argue that learning is impossible without a highly constrained hypothesis space. Here, we describe a learning system that is maximally unconstrained, operating over the space of all computations, and is able to acquire many of the key structures present in natural language from positive evidence alone. We demonstrate this by providing the same learning model with data from 74 distinct formal languages which have been argued to capture key features of language, have been studied in experimental work, or come from an interesting complexity class. The model is able to successfully induce the latent system generating the observed strings from small amounts of evidence in almost all cases, including for regular (e.g., a^n, (ab)^n, and {a,b}^+), context-free (e.g., a^n b^n, a^n b^(n+m), and x x^R), and context-sensitive (e.g., a^n b^n c^n, a^n b^m c^n d^m, and xx) languages, as well as for many languages studied in learning experiments. These results show that relatively small amounts of positive evidence can support learning of rich classes of generative computations over structures. The model provides an idealized learning setup upon which additional cognitive constraints and biases can be formalized. National Academy of Sciences 2022-01-24 2022-02-01 /pmc/articles/PMC8812683/ /pubmed/35074868 http://dx.doi.org/10.1073/pnas.2021865119 Text en Copyright © 2022 the Author(s). Published by PNAS. This open access article is distributed under the Creative Commons Attribution License 4.0 (CC BY) (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Social Sciences
Yang, Yuan
Piantadosi, Steven T.
One model for the learning of language
title One model for the learning of language
title_full One model for the learning of language
title_fullStr One model for the learning of language
title_full_unstemmed One model for the learning of language
title_short One model for the learning of language
title_sort one model for the learning of language
topic Social Sciences
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8812683/
https://www.ncbi.nlm.nih.gov/pubmed/35074868
http://dx.doi.org/10.1073/pnas.2021865119
work_keys_str_mv AT yangyuan onemodelforthelearningoflanguage
AT piantadosistevent onemodelforthelearningoflanguage