
Monotone Quantifiers Emerge via Iterated Learning

Natural languages exhibit many semantic universals, that is, properties of meaning shared across all languages. In this paper, we develop an explanation of one very prominent semantic universal, the monotonicity universal. While existing work has shown that quantifiers satisfying the monotonicity universal are easier to learn, we provide a more complete explanation by considering the emergence of quantifiers from the perspective of cultural evolution. In particular, we show that quantifiers satisfying the monotonicity universal evolve reliably in an iterated learning paradigm with neural networks as agents.
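To make the abstract's two key notions concrete, the sketch below encodes a quantifier as the set of situations (|A ∩ B|, |A \ B|) it accepts, checks right upward monotonicity, and runs the skeleton of an iterated-learning transmission chain. It is an illustrative toy under assumptions made here for brevity, not the authors' implementation: the universe size, the sampling scheme, and the memorising learner (a stand-in for the paper's neural-network agents) are all hypothetical.

from itertools import product
import random

UNIVERSE = 8  # restrictor A has at most 8 individuals

# A situation is summarised by the pair (|A ∩ B|, |A \ B|); a "quantifier"
# is encoded as the frozenset of situations it accepts.
SITUATIONS = [(ab, a_not_b)
              for ab, a_not_b in product(range(UNIVERSE + 1), repeat=2)
              if ab + a_not_b <= UNIVERSE]

def is_upward_monotone(quantifier):
    """Right upward monotonicity: if a situation is accepted, then moving an
    individual from A \\ B into A ∩ B (i.e., growing B) must stay accepted."""
    for ab, a_not_b in quantifier:
        if a_not_b > 0 and (ab + 1, a_not_b - 1) not in quantifier:
            return False
    return True

def learn(examples):
    """Toy stand-in for a neural-network learner: memorise the labelled
    examples and guess at random on unseen situations."""
    seen = dict(examples)
    return frozenset(s for s in SITUATIONS
                     if seen.get(s, random.random() < 0.5))

def iterated_learning(generations=20, sample_size=10):
    """One transmission chain: each generation's agent learns from a sample
    of the previous agent's language and then provides data for the next."""
    language = frozenset(s for s in SITUATIONS if random.random() < 0.5)
    for _ in range(generations):
        sample = random.sample(SITUATIONS, sample_size)
        language = learn([(s, s in language) for s in sample])
    return language

if __name__ == "__main__":
    final = iterated_learning()
    # With this unbiased memoriser the property need not emerge; the paper's
    # finding is that with neural-network agents the transmitted quantifiers
    # reliably become monotone.
    print("upward monotone:", is_upward_monotone(final))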


Bibliographic Details
Main Authors: Carcassi, Fausto, Steinert‐Threlkeld, Shane, Szymanik, Jakub
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc. 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8459284/
https://www.ncbi.nlm.nih.gov/pubmed/34379338
http://dx.doi.org/10.1111/cogs.13027
_version_ 1784571491100131328
author Carcassi, Fausto
Steinert‐Threlkeld, Shane
Szymanik, Jakub
author_facet Carcassi, Fausto
Steinert‐Threlkeld, Shane
Szymanik, Jakub
author_sort Carcassi, Fausto
collection PubMed
description Natural languages exhibit many semantic universals, that is, properties of meaning shared across all languages. In this paper, we develop an explanation of one very prominent semantic universal, the monotonicity universal. While existing work has shown that quantifiers satisfying the monotonicity universal are easier to learn, we provide a more complete explanation by considering the emergence of quantifiers from the perspective of cultural evolution. In particular, we show that quantifiers satisfying the monotonicity universal evolve reliably in an iterated learning paradigm with neural networks as agents.
format Online
Article
Text
id pubmed-8459284
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher John Wiley and Sons Inc.
record_format MEDLINE/PubMed
spelling pubmed-8459284 2021-09-28 Monotone Quantifiers Emerge via Iterated Learning Carcassi, Fausto Steinert‐Threlkeld, Shane Szymanik, Jakub Cogn Sci Regular Articles Natural languages exhibit many semantic universals, that is, properties of meaning shared across all languages. In this paper, we develop an explanation of one very prominent semantic universal, the monotonicity universal. While existing work has shown that quantifiers satisfying the monotonicity universal are easier to learn, we provide a more complete explanation by considering the emergence of quantifiers from the perspective of cultural evolution. In particular, we show that quantifiers satisfying the monotonicity universal evolve reliably in an iterated learning paradigm with neural networks as agents. John Wiley and Sons Inc. 2021-08-11 2021-08 /pmc/articles/PMC8459284/ /pubmed/34379338 http://dx.doi.org/10.1111/cogs.13027 Text en © 2021 The Authors. Cognitive Science published by Wiley Periodicals LLC on behalf of Cognitive Science Society (CSS). This is an open access article under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
spellingShingle Regular Articles
Carcassi, Fausto
Steinert‐Threlkeld, Shane
Szymanik, Jakub
Monotone Quantifiers Emerge via Iterated Learning
title Monotone Quantifiers Emerge via Iterated Learning
title_full Monotone Quantifiers Emerge via Iterated Learning
title_fullStr Monotone Quantifiers Emerge via Iterated Learning
title_full_unstemmed Monotone Quantifiers Emerge via Iterated Learning
title_short Monotone Quantifiers Emerge via Iterated Learning
title_sort monotone quantifiers emerge via iterated learning
topic Regular Articles
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8459284/
https://www.ncbi.nlm.nih.gov/pubmed/34379338
http://dx.doi.org/10.1111/cogs.13027
work_keys_str_mv AT carcassifausto monotonequantifiersemergeviaiteratedlearning
AT steinertthrelkeldshane monotonequantifiersemergeviaiteratedlearning
AT szymanikjakub monotonequantifiersemergeviaiteratedlearning