Semantic Representations for NLP Using VerbNet and the Generative Lexicon
Main Authors: | Brown, Susan Windisch; Bonn, Julia; Kazeminejad, Ghazaleh; Zaenen, Annie; Pustejovsky, James; Palmer, Martha |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2022 |
Subjects: | Artificial Intelligence |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9048683/ https://www.ncbi.nlm.nih.gov/pubmed/35493615 http://dx.doi.org/10.3389/frai.2022.821697 |
author | Brown, Susan Windisch; Bonn, Julia; Kazeminejad, Ghazaleh; Zaenen, Annie; Pustejovsky, James; Palmer, Martha |
author_sort | Brown, Susan Windisch |
collection | PubMed |
description | The need for deeper semantic processing of human language by our natural language processing systems is evidenced by their still-unreliable performance on inferencing tasks, even using deep learning techniques. These tasks require the detection of subtle interactions between participants in events, of sequencing of subevents that are often not explicitly mentioned, and of changes to various participants across an event. Human beings can perform this detection even when sparse lexical items are involved, suggesting that linguistic insights into these abilities could improve NLP performance. In this article, we describe new, hand-crafted semantic representations for the lexical resource VerbNet that draw heavily on the linguistic theories about subevent semantics in the Generative Lexicon (GL). VerbNet defines classes of verbs based on both their semantic and syntactic similarities, paying particular attention to shared diathesis alternations. For each class of verbs, VerbNet provides common semantic roles and typical syntactic patterns. For each syntactic pattern in a class, VerbNet defines a detailed semantic representation that traces the event participants from their initial states, through any changes and into their resulting states. The Generative Lexicon guided the structure of these representations. In GL, event structure has been integrated with dynamic semantic models in order to represent the attribute modified in the course of the event (the location of the moving entity, the extent of a created or destroyed entity, etc.) as a sequence of states related to time points or intervals. We applied that model to VerbNet semantic representations, using a class's semantic roles and a set of predicates defined across classes as components in each subevent. We will describe in detail the structure of these representations, the underlying theory that guides them, and the definition and use of the predicates. We will also evaluate the effectiveness of this resource for NLP by reviewing efforts to use the semantic representations in NLP tasks. |
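To illustrate the kind of subevent-based representation the abstract describes, the sketch below models a GL-style event trace for a change-of-state verb. The class, role names, and predicate inventory here are simplified stand-ins for illustration, not the actual entries in VerbNet:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Pred:
    """A semantic predicate applied to role fillers within one subevent."""
    name: str
    args: tuple

@dataclass
class Subevent:
    """One temporally ordered stage (e1, e2, ...) of the event."""
    label: str
    preds: list

def break_representation(agent: str, patient: str) -> list:
    """Simplified trace for a 'break'-like verb: the Patient's initial
    state, the Agent's causing process, and the Patient's result state.
    (Illustrative predicates, not VerbNet's actual inventory.)"""
    return [
        Subevent("e1", [Pred("has_state", (patient, "whole"))]),
        Subevent("e2", [Pred("do", (agent,)),
                        Pred("cause", (agent, patient))]),
        Subevent("e3", [Pred("has_state", (patient, "broken"))]),
    ]

events = break_representation("Agent", "Patient")
print([e.label for e in events])  # the subevents in temporal order
```

The sequence of `has_state` predicates is the point: an attribute of a participant (here, the Patient's integrity) is traced as a series of states tied to ordered subevents, which is what lets a consumer of the representation infer unstated intermediate changes.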
format | Online Article Text |
id | pubmed-9048683 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9048683 2022-04-29 Semantic Representations for NLP Using VerbNet and the Generative Lexicon Brown, Susan Windisch; Bonn, Julia; Kazeminejad, Ghazaleh; Zaenen, Annie; Pustejovsky, James; Palmer, Martha Front Artif Intell Artificial Intelligence Frontiers Media S.A. 2022-04-14 /pmc/articles/PMC9048683/ /pubmed/35493615 http://dx.doi.org/10.3389/frai.2022.821697 Text en Copyright © 2022 Brown, Bonn, Kazeminejad, Zaenen, Pustejovsky and Palmer. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
title | Semantic Representations for NLP Using VerbNet and the Generative Lexicon |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9048683/ https://www.ncbi.nlm.nih.gov/pubmed/35493615 http://dx.doi.org/10.3389/frai.2022.821697 |