Diagnosing BERT with Retrieval Heuristics

Bibliographic Details
Main Authors: Câmara, Arthur; Hauff, Claudia
Format: Online Article Text
Language: English
Published: 2020
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7148226/
http://dx.doi.org/10.1007/978-3-030-45439-5_40
Collection: PubMed
Description: Word embeddings, made widely popular in 2013 with the release of word2vec, have become a mainstay of NLP engineering pipelines. Recently, with the release of BERT, word embeddings have moved from the term-based embedding space to the contextual embedding space—each term is no longer represented by a single low-dimensional vector but instead each term and its context determine the vector weights. BERT’s setup and architecture have been shown to be general enough to be applicable to many natural language tasks. Importantly for Information Retrieval (IR), in contrast to prior deep learning solutions to IR problems which required significant tuning of neural net architectures and training regimes, “vanilla BERT” has been shown to outperform existing retrieval algorithms by a wide margin, including on tasks and corpora that have long resisted retrieval effectiveness gains over traditional IR baselines (such as Robust04). In this paper, we employ the recently proposed axiomatic dataset analysis technique—that is, we create diagnostic datasets that each fulfil a retrieval heuristic (both term matching and semantic-based)—to explore what BERT is able to learn. In contrast to our expectations, we find BERT, when applied to a recently released large-scale web corpus with ad-hoc topics, to not adhere to any of the explored axioms. At the same time, BERT outperforms the traditional query likelihood retrieval model by 40%. This means that the axiomatic approach to IR (and its extension of diagnostic datasets created for retrieval heuristics) may in its current form not be applicable to large-scale corpora. Additional—different—axioms are needed.
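The abstract describes diagnostic datasets that each fulfil a retrieval heuristic, evaluated against a query likelihood baseline. A minimal, illustrative sketch of that idea follows: a Dirichlet-smoothed query likelihood scorer checked against a term-frequency constraint in the style of the classic TFC1 axiom (documents of equal length that contain more query-term occurrences should score higher). The toy corpus, document pair, and parameter values are invented for illustration and are not the paper's actual datasets or code.

```python
# Illustrative axiomatic diagnostic check (toy data, not from the paper).
import math
from collections import Counter

def query_likelihood(query, doc, collection, mu=2000):
    """Dirichlet-smoothed query likelihood log-score of `doc` for `query`."""
    doc_tf = Counter(doc)          # term frequencies in the document
    col_tf = Counter(collection)   # term frequencies in the background collection
    col_len = len(collection)
    score = 0.0
    for term in query:
        p_col = col_tf[term] / col_len                      # collection LM probability
        smoothed = (doc_tf[term] + mu * p_col) / (len(doc) + mu)
        score += math.log(smoothed)
    return score

# TFC1-style diagnostic pair: same length, d2 has one extra query-term occurrence.
query = ["neural", "retrieval"]
collection = "neural retrieval models rank documents with neural nets".split()
d1 = "retrieval systems rank documents by relevance scores here".split()
d2 = "retrieval systems rank retrieval documents by relevance scores".split()

s1 = query_likelihood(query, d1, collection)
s2 = query_likelihood(query, d2, collection)
assert s2 > s1  # the extra "retrieval" occurrence raises the score: TFC1 holds
```

The paper's finding is that, on such diagnostic pairs built at web scale, BERT's score orderings do not consistently follow these axioms even though its overall effectiveness is far higher.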
Record ID: pubmed-7148226
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Series: Advances in Information Retrieval
Published online: 2020-03-17. Available at /pmc/articles/PMC7148226/ and http://dx.doi.org/10.1007/978-3-030-45439-5_40. © Springer Nature Switzerland AG 2020. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.