Multiturn dialogue generation by modeling sentence-level and discourse-level contexts
Main authors:
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2022
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9701771/ | https://www.ncbi.nlm.nih.gov/pubmed/36437277 | http://dx.doi.org/10.1038/s41598-022-24787-1
Summary: Currently, multiturn dialogue models generate human-like responses based on pretrained language models given a dialogue history. However, most existing models simply concatenate dialogue histories, which makes it difficult to maintain a high degree of consistency throughout the generated text. We speculate that this is because the encoder ignores information about the hierarchical structure between sentences. In this paper, we propose a novel multiturn dialogue generation model that captures contextual information at both the sentence level and the discourse level during the encoding process. Contextual semantic information is dynamically modeled through a difference-aware module. A sentence order prediction training task is also designed to learn representations by reconstructing the order of disrupted sentences with a learning-to-rank algorithm. Experiments on the multiturn dialogue dataset DailyDialog demonstrate that our model substantially outperforms the baseline in terms of both automatic and human evaluation metrics, generating more fluent and informative responses.
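The summary mentions a sentence order prediction task trained with a learning-to-rank algorithm. One common way to realize such an objective is a pairwise margin ranking loss over per-sentence scores; the sketch below (plain NumPy, with hand-picked scores standing in for a model's outputs) illustrates that general idea, not the authors' actual implementation.

```python
import numpy as np

def pairwise_ranking_loss(scores, true_order, margin=1.0):
    """Margin-based pairwise loss: a sentence that appears earlier in the
    true order should receive a strictly higher score than any later one."""
    n = len(true_order)
    # rank[s] = position of sentence s in the true (original) order
    rank = {s: p for p, s in enumerate(true_order)}
    loss = 0.0
    for i in range(n):
        for j in range(n):
            if rank[i] < rank[j]:  # sentence i should outrank sentence j
                loss += max(0.0, margin - (scores[i] - scores[j]))
    return loss / (n * (n - 1) / 2)  # average over ordered pairs

# Four shuffled sentences indexed 0..3; the original order was [2, 0, 3, 1].
true_order = [2, 0, 3, 1]
good = np.array([3.0, 1.0, 4.0, 2.0])  # scores consistent with the order
bad  = np.array([1.0, 1.0, 1.0, 1.0])  # uninformative scores
print(pairwise_ranking_loss(good, true_order))  # 0.0
print(pairwise_ranking_loss(bad, true_order))   # 1.0
```

In practice the scores would come from a learned head over sentence representations, so minimizing this loss pushes the encoder to embed inter-sentence (discourse-level) order information.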