Improving Multi-turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting

Open-domain retrieval-based dialogue systems require a considerable amount of training data to learn their parameters. However, in practice, the negative samples in the training data are usually selected at random from an unannotated conversation dataset. The generated training data is therefore likely to contain noise that affects the performance of response selection models. To address this difficulty, we consider utilizing the underlying correlation in the data resource itself to derive different kinds of supervision signals and reduce the influence of noisy data. More specifically, we consider a main-complementary task pair. The main task (i.e., our focus) selects the correct response given the last utterance and context, and the complementary task selects the last utterance given the response and context. The key point is that the output of the complementary task is used to set instance weights for the main task. We conduct extensive experiments on two public datasets and obtain significant improvements on both. We also investigate variants of our approach from multiple aspects, and the results verify its effectiveness.
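
The abstract describes the method only at a high level. As a rough illustration, the following is a minimal, hypothetical PyTorch-style sketch of how a complementary task's confidence could be turned into per-instance weights for the main response-selection loss. The model interfaces (main_model, comp_model) and the exact weighting rule are assumptions for illustration, not the authors' published implementation (see the full text at the links below).

```python
# Hypothetical sketch: per-instance weighting of the main response-selection loss
# using the score of a complementary last-utterance-selection model.
# Model interfaces and the weighting rule below are assumptions, not the paper's code.
import torch
import torch.nn.functional as F


def weighted_response_selection_loss(main_model, comp_model,
                                     context, last_utterance, response, labels):
    """Main-task loss, weighted per instance by the complementary task.

    main_model(context, last_utterance, response) -> matching logits, shape (batch,)
    comp_model(context, response, last_utterance) -> matching logits, shape (batch,)
    labels: 0/1 tensor of shape (batch,)
    """
    with torch.no_grad():
        # Complementary task: how plausible is the last utterance given (context, response)?
        comp_score = torch.sigmoid(comp_model(context, response, last_utterance))
        # Assumed weighting: keep positives the complementary model agrees with,
        # and down-weight randomly sampled negatives it finds too plausible,
        # since those are likely false negatives (i.e., noise).
        weights = torch.where(labels.bool(), comp_score, 1.0 - comp_score)

    # Main task: score each candidate response given (context, last_utterance).
    main_logits = main_model(context, last_utterance, response)
    per_instance = F.binary_cross_entropy_with_logits(
        main_logits, labels.float(), reduction="none")
    return (weights * per_instance).mean()
```

The weighting rule shown here is only one plausible instantiation of the idea; the paper defines its own scheme for converting complementary-task output into instance weights.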

Bibliographic Details
Main Authors: Zhou, Kun, Zhao, Wayne Xin, Zhu, Yutao, Wen, Ji-Rong, Yu, Jingsong
Format: Online Article Text
Language: English
Published: 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7206249/
http://dx.doi.org/10.1007/978-3-030-47436-2_36
_version_ 1783530377550233600
author Zhou, Kun
Zhao, Wayne Xin
Zhu, Yutao
Wen, Ji-Rong
Yu, Jingsong
author_facet Zhou, Kun
Zhao, Wayne Xin
Zhu, Yutao
Wen, Ji-Rong
Yu, Jingsong
author_sort Zhou, Kun
collection PubMed
description Open-domain retrieval-based dialogue systems require a considerable amount of training data to learn their parameters. However, in practice, the negative samples in the training data are usually selected at random from an unannotated conversation dataset. The generated training data is therefore likely to contain noise that affects the performance of response selection models. To address this difficulty, we consider utilizing the underlying correlation in the data resource itself to derive different kinds of supervision signals and reduce the influence of noisy data. More specifically, we consider a main-complementary task pair. The main task (i.e., our focus) selects the correct response given the last utterance and context, and the complementary task selects the last utterance given the response and context. The key point is that the output of the complementary task is used to set instance weights for the main task. We conduct extensive experiments on two public datasets and obtain significant improvements on both. We also investigate variants of our approach from multiple aspects, and the results verify its effectiveness.
format Online
Article
Text
id pubmed-7206249
institution National Center for Biotechnology Information
language English
publishDate 2020
record_format MEDLINE/PubMed
spelling pubmed-7206249 2020-05-08 Improving Multi-turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting Zhou, Kun Zhao, Wayne Xin Zhu, Yutao Wen, Ji-Rong Yu, Jingsong Advances in Knowledge Discovery and Data Mining Article Open-domain retrieval-based dialogue systems require a considerable amount of training data to learn their parameters. However, in practice, the negative samples in the training data are usually selected at random from an unannotated conversation dataset. The generated training data is therefore likely to contain noise that affects the performance of response selection models. To address this difficulty, we consider utilizing the underlying correlation in the data resource itself to derive different kinds of supervision signals and reduce the influence of noisy data. More specifically, we consider a main-complementary task pair. The main task (i.e., our focus) selects the correct response given the last utterance and context, and the complementary task selects the last utterance given the response and context. The key point is that the output of the complementary task is used to set instance weights for the main task. We conduct extensive experiments on two public datasets and obtain significant improvements on both. We also investigate variants of our approach from multiple aspects, and the results verify its effectiveness. 2020-04-17 /pmc/articles/PMC7206249/ http://dx.doi.org/10.1007/978-3-030-47436-2_36 Text en © Springer Nature Switzerland AG 2020 This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
spellingShingle Article
Zhou, Kun
Zhao, Wayne Xin
Zhu, Yutao
Wen, Ji-Rong
Yu, Jingsong
Improving Multi-turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting
title Improving Multi-turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting
title_full Improving Multi-turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting
title_fullStr Improving Multi-turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting
title_full_unstemmed Improving Multi-turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting
title_short Improving Multi-turn Response Selection Models with Complementary Last-Utterance Selection by Instance Weighting
title_sort improving multi-turn response selection models with complementary last-utterance selection by instance weighting
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7206249/
http://dx.doi.org/10.1007/978-3-030-47436-2_36
work_keys_str_mv AT zhoukun improvingmultiturnresponseselectionmodelswithcomplementarylastutteranceselectionbyinstanceweighting
AT zhaowaynexin improvingmultiturnresponseselectionmodelswithcomplementarylastutteranceselectionbyinstanceweighting
AT zhuyutao improvingmultiturnresponseselectionmodelswithcomplementarylastutteranceselectionbyinstanceweighting
AT wenjirong improvingmultiturnresponseselectionmodelswithcomplementarylastutteranceselectionbyinstanceweighting
AT yujingsong improvingmultiturnresponseselectionmodelswithcomplementarylastutteranceselectionbyinstanceweighting