ECAsT: a large dataset for conversational search and an evaluation of metric robustness
Main Authors: | Al-Thani, Haya; Jansen, Bernard J.; Elsayed, Tamer |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | PeerJ Inc., 2023 |
Subjects: | Artificial Intelligence |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280565/ https://www.ncbi.nlm.nih.gov/pubmed/37346722 http://dx.doi.org/10.7717/peerj-cs.1328 |
author | Al-Thani, Haya; Jansen, Bernard J.; Elsayed, Tamer |
collection | PubMed |
description | The Text REtrieval Conference Conversational Assistance Track (CAsT) is an annual conversational passage retrieval challenge aimed at creating a large-scale open-domain conversational search benchmark. To date, however, the datasets used have been small, with just over 1,000 turns and 100 conversation topics. In the first part of this research, we address the dataset limitation by building a much larger multi-turn conversational dataset for conversational search benchmarking, called Expanded-CAsT (ECAsT). ECAsT is built with a multi-stage solution that combines conversational query reformulation with neural paraphrasing, and it includes a new model to create multi-turn paraphrases. The meaning and diversity of the paraphrases are evaluated with both human and automatic evaluation. Using this methodology, we produce and release to the research community a conversational search dataset that is 665% larger in size and language diversity than any available at the time of this study, with more than 9,200 turns. The augmented dataset provides not only more data but also more language diversity, improving conversational search neural model training and testing. In the second part of the research, we use ECAsT to assess the robustness of the traditional metrics used for conversational evaluation in CAsT and identify their bias toward language diversity. Results show the benefits of adding language diversity for improving the collection of pooled passages and reducing evaluation bias. We found that introducing language diversity via paraphrases returned up to 24% new passages, compared to only 2% using the CAsT baseline. |
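The multi-stage expansion pipeline the abstract describes (conversational query reformulation, then neural paraphrasing, then collecting the variants) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `rewrite_query` and `paraphrase` below are hypothetical rule-based stubs standing in for the trained reformulation and paraphrasing models used in ECAsT.

```python
# Minimal sketch of a CAsT-style conversation expansion pipeline.
# NOTE: rewrite_query and paraphrase are hypothetical stubs; the real
# ECAsT pipeline uses trained neural reformulation/paraphrase models.

def rewrite_query(turn: str, history: list[str]) -> str:
    """Stub for conversational query reformulation: make the turn
    self-contained by resolving a pronoun against the dialogue history."""
    if history and " it" in f" {turn}":
        topic = history[0].split()[-1].rstrip("?")
        return turn.replace("it", topic, 1)
    return turn

def paraphrase(query: str) -> list[str]:
    """Stub for neural paraphrasing: return surface variants of a query."""
    variants = [query]
    if query.lower().startswith("what is"):
        variants.append("Tell me about" + query[7:].rstrip("?"))
        variants.append("Can you explain" + query[7:])
    return variants

def expand_conversation(turns: list[str]) -> list[str]:
    """Expand each raw turn into deduplicated, self-contained paraphrases."""
    expanded, seen = [], set()
    history: list[str] = []
    for turn in turns:
        resolved = rewrite_query(turn, history)
        for variant in paraphrase(resolved):
            if variant not in seen:  # keep only novel surface forms
                seen.add(variant)
                expanded.append(variant)
        history.append(resolved)
    return expanded

turns = ["What is conversational search?", "Why is it hard to evaluate?"]
print(expand_conversation(turns))
```

With these toy stubs, two raw turns expand into four self-contained variants; in the paper's setting, the same structure turns roughly 1,000 turns into more than 9,200.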
format | Online Article Text |
id | pubmed-10280565 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | PeerJ Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10280565 2023-06-21. ECAsT: a large dataset for conversational search and an evaluation of metric robustness. Al-Thani, Haya; Jansen, Bernard J.; Elsayed, Tamer. PeerJ Comput Sci, Artificial Intelligence. PeerJ Inc., published 2023-04-17. /pmc/articles/PMC10280565/ /pubmed/37346722 http://dx.doi.org/10.7717/peerj-cs.1328 Text, en. ©2023 Al-Thani et al.
This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by-nc/4.0/), which permits using, remixing, and building upon the work non-commercially, as long as it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited. |
title | ECAsT: a large dataset for conversational search and an evaluation of metric robustness |
topic | Artificial Intelligence |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280565/ https://www.ncbi.nlm.nih.gov/pubmed/37346722 http://dx.doi.org/10.7717/peerj-cs.1328 |