Distributional Learning of Appearance
Opportunities for associationist learning of word meaning, where a word is heard or read contemporaneously with information being available on its meaning, are considered too infrequent to account for the rate of language acquisition in children. It has been suggested that additional learning could...
Main Authors: | Griffin, Lewis D.; Wahab, M. Husni; Newell, Andrew J. |
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2013 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3584031/ https://www.ncbi.nlm.nih.gov/pubmed/23460927 http://dx.doi.org/10.1371/journal.pone.0058074 |
_version_ | 1782475517094526976 |
author | Griffin, Lewis D. Wahab, M. Husni Newell, Andrew J. |
author_facet | Griffin, Lewis D. Wahab, M. Husni Newell, Andrew J. |
author_sort | Griffin, Lewis D. |
collection | PubMed |
description | Opportunities for associationist learning of word meaning, where a word is heard or read contemporaneously with information being available on its meaning, are considered too infrequent to account for the rate of language acquisition in children. It has been suggested that additional learning could occur in a distributional mode, where information is gleaned from the distributional statistics (word co-occurrence etc.) of natural language. Such statistics are relevant to meaning because of the Distributional Principle that ‘words of similar meaning tend to occur in similar contexts’. Computational systems, such as Latent Semantic Analysis, have substantiated the viability of distributional learning of word meaning, by showing that semantic similarities between words can be accurately estimated from analysis of the distributional statistics of a natural language corpus. We consider whether appearance similarities can also be learnt in a distributional mode. As grounds for such a mode we advance the Appearance Hypothesis that ‘words with referents of similar appearance tend to occur in similar contexts’. We assess the viability of such learning by looking at the performance of a computer system that interpolates, on the basis of distributional and appearance similarity, from words that it has been explicitly taught the appearance of, in order to identify and name objects that it has not been taught about. Our experiment tests with a set of 660 simple concrete noun words. Appearance information on words is modelled using sets of images of examples of the word. Distributional similarity is computed from a standard natural language corpus. Our computational results support the viability of distributional learning of appearance. |
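The distributional similarity the description relies on can be illustrated with a minimal sketch: represent each word by a vector of co-occurrence counts over a context window, then compare words by cosine similarity, so that words appearing in similar contexts score as similar. This is an illustrative toy, not the paper's actual pipeline (which uses a standard large corpus and, in the LSA tradition, dimensionality reduction); the corpus and function names below are invented for the example.

```python
from collections import Counter
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """For each word, count how often every other word appears
    within `window` positions of it across the corpus."""
    vecs = {}
    for sent in sentences:
        for i, w in enumerate(sent):
            ctx = vecs.setdefault(w, Counter())
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    ctx[sent[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy corpus: 'cat' and 'dog' occur in similar contexts; 'stone' does not.
corpus = [
    "the cat chased the mouse".split(),
    "the dog chased the ball".split(),
    "the stone lay on the road".split(),
]
vecs = cooccurrence_vectors(corpus)
sim_cat_dog = cosine(vecs["cat"], vecs["dog"])
sim_cat_stone = cosine(vecs["cat"], vecs["stone"])
# 'cat' and 'dog' come out more similar than 'cat' and 'stone',
# mirroring the Distributional Principle on a tiny scale.
```

In the paper's setting, such distributional similarities are combined with appearance similarities from image sets, so that the appearance of an untaught word can be interpolated from distributionally similar taught words.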
format | Online Article Text |
id | pubmed-3584031 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2013 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-3584031 2013-03-04 Distributional Learning of Appearance Griffin, Lewis D. Wahab, M. Husni Newell, Andrew J. PLoS One Research Article Opportunities for associationist learning of word meaning, where a word is heard or read contemporaneously with information being available on its meaning, are considered too infrequent to account for the rate of language acquisition in children. It has been suggested that additional learning could occur in a distributional mode, where information is gleaned from the distributional statistics (word co-occurrence etc.) of natural language. Such statistics are relevant to meaning because of the Distributional Principle that ‘words of similar meaning tend to occur in similar contexts’. Computational systems, such as Latent Semantic Analysis, have substantiated the viability of distributional learning of word meaning, by showing that semantic similarities between words can be accurately estimated from analysis of the distributional statistics of a natural language corpus. We consider whether appearance similarities can also be learnt in a distributional mode. As grounds for such a mode we advance the Appearance Hypothesis that ‘words with referents of similar appearance tend to occur in similar contexts’. We assess the viability of such learning by looking at the performance of a computer system that interpolates, on the basis of distributional and appearance similarity, from words that it has been explicitly taught the appearance of, in order to identify and name objects that it has not been taught about. Our experiment tests with a set of 660 simple concrete noun words. Appearance information on words is modelled using sets of images of examples of the word. Distributional similarity is computed from a standard natural language corpus. Our computational results support the viability of distributional learning of appearance.
Public Library of Science 2013-02-27 /pmc/articles/PMC3584031/ /pubmed/23460927 http://dx.doi.org/10.1371/journal.pone.0058074 Text en © 2013 Griffin et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited. |
spellingShingle | Research Article Griffin, Lewis D. Wahab, M. Husni Newell, Andrew J. Distributional Learning of Appearance |
title | Distributional Learning of Appearance |
title_full | Distributional Learning of Appearance |
title_fullStr | Distributional Learning of Appearance |
title_full_unstemmed | Distributional Learning of Appearance |
title_short | Distributional Learning of Appearance |
title_sort | distributional learning of appearance |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3584031/ https://www.ncbi.nlm.nih.gov/pubmed/23460927 http://dx.doi.org/10.1371/journal.pone.0058074 |
work_keys_str_mv | AT griffinlewisd distributionallearningofappearance AT wahabmhusni distributionallearningofappearance AT newellandrewj distributionallearningofappearance |