
Human–Machine Multi-Turn Language Dialogue Interaction Based on Deep Learning

In multi-turn dialogue, as the number of turns increases, intention recognition and the generation of the next reply become increasingly difficult. This paper mainly optimizes the context-information extraction ability of the Seq2Seq encoder in multi-turn dial...

Full description

Bibliographic Details
Main Authors: Ke, Xianxin, Hu, Ping, Yang, Chenghao, Zhang, Renbao
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8955483/
https://www.ncbi.nlm.nih.gov/pubmed/35334647
http://dx.doi.org/10.3390/mi13030355
_version_ 1784676346950057984
author Ke, Xianxin
Hu, Ping
Yang, Chenghao
Zhang, Renbao
author_facet Ke, Xianxin
Hu, Ping
Yang, Chenghao
Zhang, Renbao
author_sort Ke, Xianxin
collection PubMed
description In multi-turn dialogue, as the number of turns increases, intention recognition and the generation of the next reply become increasingly difficult. This paper mainly optimizes the context-information extraction ability of the Seq2Seq encoder in multi-turn dialogue modeling. We fuse historical dialogue information with the current input utterance in the encoder to better capture the dialogue context. To this end, we propose a BERT-based fusion encoder, ProBERT-To-GUR (PBTG), and an enhanced ELMO model, 3-ELMO-Attention-GRU (3EAG). Both models are designed to strengthen the contextual information extraction capability of multi-turn dialogue. To verify their effectiveness, we evaluate them on combined data from the LCCC-large and NaturalConv multi-turn dialogue datasets. The experimental results show that, in both open-domain and fixed-topic multi-turn dialogue experiments, the two proposed Seq2Seq encoding models significantly outperform current state-of-the-art models. For fixed-topic multi-turn dialogue, the 3EAG model achieves the best average BLEU score, 32.4, yielding the best language generation quality, and its BLEU score in the live dialogue verification experiment also exceeds 31.8. For open-domain multi-turn dialogue, the PBTG model achieves the best average BLEU score, 31.8, and its BLEU score in the live dialogue verification experiment exceeds 31.2. Thus, of the two tasks, the 3EAG model is better suited to fixed-topic multi-turn dialogue, while the PBTG model is stronger on open-domain multi-turn dialogue; our models are therefore significant for advancing multi-turn dialogue research.
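The core idea in the abstract, fusing historical dialogue information with the current input inside the encoder, can be illustrated with a minimal attention-based fusion sketch. This is a toy illustration only, not the paper's actual PBTG or 3EAG architecture; the function names, toy vectors, and plain dot-product attention are assumptions chosen for demonstration.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_context(history, current):
    """Fuse history turn vectors with the current utterance vector.

    Each history turn is scored against the current utterance by dot
    product, the scores are normalized with softmax, and the weighted
    history summary is added to the current vector (a toy stand-in for
    encoder-side context fusion).
    """
    scores = [sum(h_d * c_d for h_d, c_d in zip(h, current)) for h in history]
    weights = softmax(scores)
    # Weighted sum of history vectors, dimension by dimension.
    context = [sum(w * h[d] for w, h in zip(weights, history))
               for d in range(len(current))]
    # Fused representation: context summary plus the current input.
    return [c + x for c, x in zip(context, current)]

# Two toy history turns and a current utterance in a 2-d embedding space.
history = [[1.0, 0.0], [0.0, 1.0]]
current = [0.0, 1.0]
fused = fuse_context(history, current)
```

History turns more similar to the current utterance receive higher attention weight, so the fused vector leans toward the relevant parts of the dialogue context, which is the intuition behind fusing history and current input in the encoder.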
format Online
Article
Text
id pubmed-8955483
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8955483 2022-03-26 Human–Machine Multi-Turn Language Dialogue Interaction Based on Deep Learning Ke, Xianxin Hu, Ping Yang, Chenghao Zhang, Renbao Micromachines (Basel) Article MDPI 2022-02-23 /pmc/articles/PMC8955483/ /pubmed/35334647 http://dx.doi.org/10.3390/mi13030355 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Ke, Xianxin
Hu, Ping
Yang, Chenghao
Zhang, Renbao
Human–Machine Multi-Turn Language Dialogue Interaction Based on Deep Learning
title Human–Machine Multi-Turn Language Dialogue Interaction Based on Deep Learning
title_full Human–Machine Multi-Turn Language Dialogue Interaction Based on Deep Learning
title_fullStr Human–Machine Multi-Turn Language Dialogue Interaction Based on Deep Learning
title_full_unstemmed Human–Machine Multi-Turn Language Dialogue Interaction Based on Deep Learning
title_short Human–Machine Multi-Turn Language Dialogue Interaction Based on Deep Learning
title_sort human–machine multi-turn language dialogue interaction based on deep learning
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8955483/
https://www.ncbi.nlm.nih.gov/pubmed/35334647
http://dx.doi.org/10.3390/mi13030355
work_keys_str_mv AT kexianxin humanmachinemultiturnlanguagedialogueinteractionbasedondeeplearning
AT huping humanmachinemultiturnlanguagedialogueinteractionbasedondeeplearning
AT yangchenghao humanmachinemultiturnlanguagedialogueinteractionbasedondeeplearning
AT zhangrenbao humanmachinemultiturnlanguagedialogueinteractionbasedondeeplearning