Curriculum Learning Strategies for IR: An Empirical Study on Conversation Response Ranking

Neural ranking models are traditionally trained on a series of random batches, sampled uniformly from the entire training set. Curriculum learning has recently been shown to improve neural models' effectiveness by sampling batches non-uniformly, going from easy to difficult instances during training. In the context of neural Information Retrieval (IR), curriculum learning has not been explored yet, and so it remains unclear (1) how to measure the difficulty of training instances and (2) how to transition from easy to difficult instances during training. To address both challenges and determine whether curriculum learning is beneficial for neural ranking models, we need large-scale datasets and a retrieval task that allows us to conduct a wide range of experiments. For this purpose, we resort to the task of conversation response ranking: ranking responses given the conversation history. In order to deal with challenge (1), we explore scoring functions to measure the difficulty of conversations based on different input spaces. To address challenge (2), we evaluate different pacing functions, which determine the velocity at which we go from easy to difficult instances. We find that, overall, by just intelligently sorting the training data (i.e., by performing curriculum learning) we can improve the retrieval effectiveness by up to 2% (the source code is available at https://github.com/Guzpenha/transformers_cl).

Bibliographic Details
Main Authors: Penha, Gustavo; Hauff, Claudia
Format: Online Article Text
Language: English
Published: 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7148246/
http://dx.doi.org/10.1007/978-3-030-45439-5_46
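The curriculum strategy summarized in the abstract — sort training instances by a difficulty score, then let a pacing function control how quickly the sampled pool grows from easy to difficult — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the identity difficulty function and the linear pacing schedule below are assumptions standing in for the scoring and pacing functions the authors actually evaluate.

```python
import random

def curriculum_batches(dataset, difficulty, pacing, num_steps, batch_size):
    """Yield batches sampled from an easy-to-difficult curriculum.

    dataset:    list of training instances
    difficulty: instance -> difficulty score (lower = easier); stand-in
                for the paper's scoring functions over different input spaces
    pacing:     step t -> fraction of the sorted data available at step t;
                stand-in for the paper's pacing functions
    """
    ordered = sorted(dataset, key=difficulty)  # easiest instances first
    for t in range(num_steps):
        frac = min(1.0, max(0.0, pacing(t)))
        # Sampling pool grows over time; never smaller than one batch.
        pool = ordered[: max(batch_size, int(frac * len(ordered)))]
        yield random.sample(pool, min(batch_size, len(pool)))

def linear_pacing(t, num_steps=100, start=0.2):
    """Illustrative linear pacing: begin with 20% of the (easiest) data
    and expose the full training set halfway through training."""
    return start + (1.0 - start) * min(1.0, 2.0 * t / num_steps)
```

For example, with 100 instances and `batch_size=5`, the first batch is drawn only from the 20 easiest instances, while later batches are drawn from the whole set; standard uniform training is recovered by a pacing function that always returns 1.0.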
Published in: Advances in Information Retrieval (2020-03-17)
Collection: PubMed, National Center for Biotechnology Information (record id: pubmed-7148246)
© Springer Nature Switzerland AG 2020. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.