
Neural response generation for task completion using conversational knowledge graph

An effective dialogue generation system for task completion is challenging to build. The task requires the response generation system to produce responses consistent with the intent and slot values, to exhibit diversity in its responses, and to handle multiple domains. The responses also need to be contextually relevant with respect to the previous utterances in the conversation. In this paper, we build six different models containing Bi-directional Long Short-Term Memory (Bi-LSTM) and Bidirectional Encoder Representations from Transformers (BERT) based encoders. To generate the correct slot values, we implement a copy mechanism at the decoder side. To capture the conversation context and the current state of the conversation, we introduce a simple heuristic to build a conversational knowledge graph. Using this algorithm, we are able to capture the important aspects of a conversation. This conversational knowledge graph is then used by our response generation model to generate more relevant and consistent responses. With this knowledge graph, we do not need the entire utterance history; only the last utterance is required to capture the conversational context. We conduct experiments showing the effectiveness of the knowledge graph in capturing the context and generating good responses. We compare these results against hierarchical encoder-decoder models and show that using triples from the conversational knowledge graph is an effective method to capture the context and the user's requirements. Using this knowledge graph, we obtain an average gain of 0.75 BLEU points across the different models. Similar improvements also hold across the manual evaluation metrics.

Bibliographic Details
Main Authors: Ahmad, Zishan; Ekbal, Asif; Sengupta, Shubhashis; Bhattacharyya, Pushpak
Format: Online Article Text
Language: English
Published: Public Library of Science, 2023
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9910720/
https://www.ncbi.nlm.nih.gov/pubmed/36758020
http://dx.doi.org/10.1371/journal.pone.0269856
Collection: PubMed
Record ID: pubmed-9910720
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: PLoS One (Research Article)
Publication Date: 2023-02-09
Rights: © 2023 Ahmad et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
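
The abstract describes representing the current state of the conversation as knowledge-graph triples and feeding them, together with only the last utterance, to the response generator. The record does not spell out the paper's heuristic, so the Python sketch below is purely illustrative: the (domain, slot, value) triple schema, the ConversationalKG class, and the serialization format are assumptions for illustration, not the authors' actual algorithm or code.

# Illustrative sketch only: the triple schema and update rule are assumptions,
# not the paper's actual heuristic.

from typing import Dict, List, Tuple

Triple = Tuple[str, str, str]  # (domain, slot, value)


class ConversationalKG:
    """Keeps the current state of the conversation as a small set of triples."""

    def __init__(self) -> None:
        self._state: Dict[Tuple[str, str], str] = {}

    def update(self, domain: str, slots: Dict[str, str]) -> None:
        # Later mentions of a slot overwrite earlier ones, so the graph always
        # reflects the latest user requirement.
        for slot, value in slots.items():
            self._state[(domain, slot)] = value

    def triples(self) -> List[Triple]:
        return [(d, s, v) for (d, s), v in self._state.items()]

    def serialize(self) -> str:
        # Flatten the triples into a string that can be prepended to the last
        # utterance and fed to a Bi-LSTM or BERT-based encoder.
        return " ; ".join(f"{d} | {s} | {v}" for d, s, v in self.triples())


if __name__ == "__main__":
    kg = ConversationalKG()
    kg.update("restaurant", {"food": "italian", "area": "centre"})
    kg.update("restaurant", {"pricerange": "cheap"})

    last_utterance = "Something cheap in the centre, please."
    encoder_input = kg.serialize() + " [SEP] " + last_utterance
    print(encoder_input)
    # restaurant | food | italian ; restaurant | area | centre ;
    # restaurant | pricerange | cheap [SEP] Something cheap in the centre, please.

In this sketch the serialized triples replace the full utterance history as conversational context, which mirrors the abstract's claim that only the last utterance plus the knowledge graph is needed as model input.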