
Using Neural Networks to Generate Inferential Roles for Natural Language

Neural networks have long been used to study linguistic phenomena spanning the domains of phonology, morphology, syntax, and semantics. Of these domains, semantics is somewhat unique in that there is little clarity concerning what a model needs to be able to do in order to provide an account of how the meanings of complex linguistic expressions, such as sentences, are understood. We argue that one thing such models need to be able to do is generate predictions about which further sentences are likely to follow from a given sentence; these define the sentence's “inferential role.” We then show that it is possible to train a tree-structured neural network model to generate very simple examples of such inferential roles using the recently released Stanford Natural Language Inference (SNLI) dataset. On an empirical front, we evaluate the performance of this model by reporting entailment prediction accuracies on a set of test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both human-generated and model-generated entailments for a random selection of sentences in this test set. On a more theoretical front, we argue in favor of a revision to some common assumptions about semantics: understanding a linguistic expression is not only a matter of mapping it onto a representation that somehow constitutes its meaning; rather, understanding a linguistic expression is mainly a matter of being able to draw certain inferences. Inference should accordingly be at the core of any model of semantic cognition.

Bibliographic Details
Main Authors: Blouw, Peter; Eliasmith, Chris
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2018
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5776445/
https://www.ncbi.nlm.nih.gov/pubmed/29387031
http://dx.doi.org/10.3389/fpsyg.2017.02335
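
The abstract describes training a tree-structured (recursive) neural network on the Stanford Natural Language Inference (SNLI) dataset to predict which sentences follow from a given sentence. The following is a minimal illustrative sketch of the tree-structured composition idea only, not the authors' implementation: the vocabulary, parse trees, dimensions, and untrained random weights are all invented for this example, and the paper's model generates entailed sentences rather than merely scoring the three SNLI labels as done here.

    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 16  # embedding size, chosen arbitrarily for illustration

    # Toy vocabulary with random embeddings (a real model would learn these).
    vocab = {w: rng.standard_normal(DIM)
             for w in ["a", "dog", "runs", "an", "animal", "moves"]}

    # Shared composition weights for the tree-structured (recursive) layer.
    W = rng.standard_normal((DIM, 2 * DIM)) * 0.1
    b = np.zeros(DIM)

    def compose(tree):
        """Bottom-up composition over a binary parse: leaves are words,
        internal nodes are (left, right) pairs merged by one tanh layer."""
        if isinstance(tree, str):
            return vocab[tree]
        left, right = tree
        children = np.concatenate([compose(left), compose(right)])
        return np.tanh(W @ children + b)

    # Encode a premise and a hypothesis from their binary parses.
    premise = ("a", ("dog", "runs"))
    hypothesis = ("an", ("animal", "moves"))
    p, h = compose(premise), compose(hypothesis)

    # A linear layer over the paired encodings scores the three SNLI labels.
    W_cls = rng.standard_normal((3, 2 * DIM)) * 0.1
    logits = W_cls @ np.concatenate([p, h])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    print(dict(zip(["entailment", "neutral", "contradiction"], probs.round(3))))

In a trained system, the composition and label weights would be fit to SNLI's labeled premise/hypothesis pairs; the probabilities printed here are meaningless because every weight is random.
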
_version_ 1783294083027959808
author Blouw, Peter
Eliasmith, Chris
author_facet Blouw, Peter
Eliasmith, Chris
author_sort Blouw, Peter
collection PubMed
description Neural networks have long been used to study linguistic phenomena spanning the domains of phonology, morphology, syntax, and semantics. Of these domains, semantics is somewhat unique in that there is little clarity concerning what a model needs to be able to do in order to provide an account of how the meanings of complex linguistic expressions, such as sentences, are understood. We argue that one thing such models need to be able to do is generate predictions about which further sentences are likely to follow from a given sentence; these define the sentence's “inferential role.” We then show that it is possible to train a tree-structured neural network model to generate very simple examples of such inferential roles using the recently released Stanford Natural Language Inference (SNLI) dataset. On an empirical front, we evaluate the performance of this model by reporting entailment prediction accuracies on a set of test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both human-generated and model-generated entailments for a random selection of sentences in this test set. On a more theoretical front, we argue in favor of a revision to some common assumptions about semantics: understanding a linguistic expression is not only a matter of mapping it onto a representation that somehow constitutes its meaning; rather, understanding a linguistic expression is mainly a matter of being able to draw certain inferences. Inference should accordingly be at the core of any model of semantic cognition.
format Online
Article
Text
id pubmed-5776445
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-5776445 2018-01-31 Using Neural Networks to Generate Inferential Roles for Natural Language Blouw, Peter Eliasmith, Chris Front Psychol Psychology Neural networks have long been used to study linguistic phenomena spanning the domains of phonology, morphology, syntax, and semantics. Of these domains, semantics is somewhat unique in that there is little clarity concerning what a model needs to be able to do in order to provide an account of how the meanings of complex linguistic expressions, such as sentences, are understood. We argue that one thing such models need to be able to do is generate predictions about which further sentences are likely to follow from a given sentence; these define the sentence's “inferential role.” We then show that it is possible to train a tree-structured neural network model to generate very simple examples of such inferential roles using the recently released Stanford Natural Language Inference (SNLI) dataset. On an empirical front, we evaluate the performance of this model by reporting entailment prediction accuracies on a set of test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both human-generated and model-generated entailments for a random selection of sentences in this test set. On a more theoretical front, we argue in favor of a revision to some common assumptions about semantics: understanding a linguistic expression is not only a matter of mapping it onto a representation that somehow constitutes its meaning; rather, understanding a linguistic expression is mainly a matter of being able to draw certain inferences. Inference should accordingly be at the core of any model of semantic cognition. Frontiers Media S.A. 2018-01-17 /pmc/articles/PMC5776445/ /pubmed/29387031 http://dx.doi.org/10.3389/fpsyg.2017.02335 Text en Copyright © 2018 Blouw and Eliasmith. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Psychology
Blouw, Peter
Eliasmith, Chris
Using Neural Networks to Generate Inferential Roles for Natural Language
title Using Neural Networks to Generate Inferential Roles for Natural Language
title_full Using Neural Networks to Generate Inferential Roles for Natural Language
title_fullStr Using Neural Networks to Generate Inferential Roles for Natural Language
title_full_unstemmed Using Neural Networks to Generate Inferential Roles for Natural Language
title_short Using Neural Networks to Generate Inferential Roles for Natural Language
title_sort using neural networks to generate inferential roles for natural language
topic Psychology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5776445/
https://www.ncbi.nlm.nih.gov/pubmed/29387031
http://dx.doi.org/10.3389/fpsyg.2017.02335
work_keys_str_mv AT blouwpeter usingneuralnetworkstogenerateinferentialrolesfornaturallanguage
AT eliasmithchris usingneuralnetworkstogenerateinferentialrolesfornaturallanguage