What Homophones Say about Words

The number of potential meanings for a new word is astronomical. To make the word-learning problem tractable, one must restrict the hypothesis space. To do so, current word learning accounts often incorporate constraints about cognition or about the mature lexicon directly in the learning device. We are concerned with the convexity constraint, which holds that concepts (privileged sets of entities that we think of as “coherent”) do not have gaps (if A and B belong to a concept, so does any entity “between” A and B). To derive a linguistic constraint from it, learning algorithms have percolated this constraint from concepts to word forms: some algorithms rely on the possibility that word forms are associated with convex sets of objects. Yet this does not have to be the case: homophones are word forms associated with two separate words and meanings. Two sets of experiments show that when evidence suggests that a novel label is associated with a disjoint (non-convex) set of objects, either (a) because there is a gap in conceptual space between the learning exemplars for a given word or (b) because of the intervention of other lexical items in that gap, adults prefer to postulate homophony, where a single word form is associated with two separate words and meanings, rather than inferring that the word could have a disjunctive, discontinuous meaning. These results about homophony must be integrated into current word learning algorithms. We conclude by arguing for a weaker specialization of word learning algorithms, which may too often miss important constraints by focusing on a restricted empirical basis (e.g., non-homophonous content words).

Bibliographic Details
Main Authors: Dautriche, Isabelle; Chemla, Emmanuel
Format: Online Article Text
Language: English
Published: Public Library of Science, 2016
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5008697/
https://www.ncbi.nlm.nih.gov/pubmed/27583384
http://dx.doi.org/10.1371/journal.pone.0162176
_version_ 1782451420386033664
author Dautriche, Isabelle
Chemla, Emmanuel
author_facet Dautriche, Isabelle
Chemla, Emmanuel
author_sort Dautriche, Isabelle
collection PubMed
description The number of potential meanings for a new word is astronomical. To make the word-learning problem tractable, one must restrict the hypothesis space. To do so, current word learning accounts often incorporate constraints about cognition or about the mature lexicon directly in the learning device. We are concerned with the convexity constraint, which holds that concepts (privileged sets of entities that we think of as “coherent”) do not have gaps (if A and B belong to a concept, so does any entity “between” A and B). To derive a linguistic constraint from it, learning algorithms have percolated this constraint from concepts to word forms: some algorithms rely on the possibility that word forms are associated with convex sets of objects. Yet this does not have to be the case: homophones are word forms associated with two separate words and meanings. Two sets of experiments show that when evidence suggests that a novel label is associated with a disjoint (non-convex) set of objects, either (a) because there is a gap in conceptual space between the learning exemplars for a given word or (b) because of the intervention of other lexical items in that gap, adults prefer to postulate homophony, where a single word form is associated with two separate words and meanings, rather than inferring that the word could have a disjunctive, discontinuous meaning. These results about homophony must be integrated into current word learning algorithms. We conclude by arguing for a weaker specialization of word learning algorithms, which may too often miss important constraints by focusing on a restricted empirical basis (e.g., non-homophonous content words).
format Online
Article
Text
id pubmed-5008697
institution National Center for Biotechnology Information
language English
publishDate 2016
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-5008697 2016-09-27 What Homophones Say about Words Dautriche, Isabelle Chemla, Emmanuel PLoS One Research Article
Public Library of Science 2016-09-01 /pmc/articles/PMC5008697/ /pubmed/27583384 http://dx.doi.org/10.1371/journal.pone.0162176 Text en © 2016 Dautriche, Chemla http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Dautriche, Isabelle
Chemla, Emmanuel
What Homophones Say about Words
title What Homophones Say about Words
title_full What Homophones Say about Words
title_fullStr What Homophones Say about Words
title_full_unstemmed What Homophones Say about Words
title_short What Homophones Say about Words
title_sort what homophones say about words
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5008697/
https://www.ncbi.nlm.nih.gov/pubmed/27583384
http://dx.doi.org/10.1371/journal.pone.0162176
work_keys_str_mv AT dautricheisabelle whathomophonessayaboutwords
AT chemlaemmanuel whathomophonessayaboutwords