Gaussian hierarchical latent Dirichlet allocation: Bringing polysemy back
Main Authors: Yoshida, Takahiro; Hisano, Ryohei; Ohnishi, Takaaki
Format: Online Article Text
Language: English
Published: Public Library of Science, 2023
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10337977/ ; https://www.ncbi.nlm.nih.gov/pubmed/37436968 ; http://dx.doi.org/10.1371/journal.pone.0288274
_version_ | 1785071533843021824 |
author | Yoshida, Takahiro; Hisano, Ryohei; Ohnishi, Takaaki |
author_facet | Yoshida, Takahiro; Hisano, Ryohei; Ohnishi, Takaaki |
author_sort | Yoshida, Takahiro |
collection | PubMed |
description | Topic models are widely used to discover the latent representation of a set of documents. The two canonical models are latent Dirichlet allocation and Gaussian latent Dirichlet allocation: the former uses multinomial distributions over words, and the latter uses multivariate Gaussian distributions over pre-trained word embedding vectors as the latent topic representations. Compared with latent Dirichlet allocation, Gaussian latent Dirichlet allocation is limited in that it does not capture the polysemy of a word such as “bank.” In this paper, we show that Gaussian latent Dirichlet allocation can recover the ability to capture polysemy by introducing a hierarchical structure over the set of topics that the model can use to represent a given document. Our Gaussian hierarchical latent Dirichlet allocation significantly improves polysemy detection compared with Gaussian-based models and provides more parsimonious topic representations than hierarchical latent Dirichlet allocation. Our extensive quantitative experiments show that our model also achieves better topic coherence and held-out document predictive accuracy over a wide range of corpora and word embedding vectors, and significantly improves the capture of polysemy compared with GLDA and CGTM. Our model learns the underlying topic distribution and the hierarchical structure among topics simultaneously, which can further be used to understand the correlations among topics. Moreover, the added flexibility of our model does not necessarily increase the time complexity compared with GLDA and CGTM, which makes our model a good competitor to GLDA. |
format | Online Article Text |
id | pubmed-10337977 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-10337977 2023-07-13 Gaussian hierarchical latent Dirichlet allocation: Bringing polysemy back Yoshida, Takahiro; Hisano, Ryohei; Ohnishi, Takaaki PLoS One Research Article Topic models are widely used to discover the latent representation of a set of documents. The two canonical models are latent Dirichlet allocation and Gaussian latent Dirichlet allocation: the former uses multinomial distributions over words, and the latter uses multivariate Gaussian distributions over pre-trained word embedding vectors as the latent topic representations. Compared with latent Dirichlet allocation, Gaussian latent Dirichlet allocation is limited in that it does not capture the polysemy of a word such as “bank.” In this paper, we show that Gaussian latent Dirichlet allocation can recover the ability to capture polysemy by introducing a hierarchical structure over the set of topics that the model can use to represent a given document. Our Gaussian hierarchical latent Dirichlet allocation significantly improves polysemy detection compared with Gaussian-based models and provides more parsimonious topic representations than hierarchical latent Dirichlet allocation. Our extensive quantitative experiments show that our model also achieves better topic coherence and held-out document predictive accuracy over a wide range of corpora and word embedding vectors, and significantly improves the capture of polysemy compared with GLDA and CGTM. Our model learns the underlying topic distribution and the hierarchical structure among topics simultaneously, which can further be used to understand the correlations among topics. Moreover, the added flexibility of our model does not necessarily increase the time complexity compared with GLDA and CGTM, which makes our model a good competitor to GLDA. Public Library of Science 2023-07-12 /pmc/articles/PMC10337977/ /pubmed/37436968 http://dx.doi.org/10.1371/journal.pone.0288274 Text en © 2023 Yoshida et al https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article; Yoshida, Takahiro; Hisano, Ryohei; Ohnishi, Takaaki; Gaussian hierarchical latent Dirichlet allocation: Bringing polysemy back |
title | Gaussian hierarchical latent Dirichlet allocation: Bringing polysemy back |
title_full | Gaussian hierarchical latent Dirichlet allocation: Bringing polysemy back |
title_fullStr | Gaussian hierarchical latent Dirichlet allocation: Bringing polysemy back |
title_full_unstemmed | Gaussian hierarchical latent Dirichlet allocation: Bringing polysemy back |
title_short | Gaussian hierarchical latent Dirichlet allocation: Bringing polysemy back |
title_sort | gaussian hierarchical latent dirichlet allocation: bringing polysemy back |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10337977/ https://www.ncbi.nlm.nih.gov/pubmed/37436968 http://dx.doi.org/10.1371/journal.pone.0288274 |
work_keys_str_mv | AT yoshidatakahiro gaussianhierarchicallatentdirichletallocationbringingpolysemyback AT hisanoryohei gaussianhierarchicallatentdirichletallocationbringingpolysemyback AT ohnishitakaaki gaussianhierarchicallatentdirichletallocationbringingpolysemyback |
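The abstract above describes topics in Gaussian LDA as multivariate Gaussians over pre-trained word embeddings, and attributes the recovered polysemy to a hierarchy that restricts each document to a subset of topics. The following is a minimal illustrative sketch of that idea only, not the authors' code: all topic names, means, weights, and the toy "bank" vector are invented for illustration. It scores one embedding under each Gaussian topic, first with all topics available (a flat, GLDA-like setting) and then with document-specific topic subsets standing in for paths in a topic hierarchy.

```python
# Illustrative sketch (not the paper's implementation): how a Gaussian topic
# model scores a word embedding under each topic, and why restricting a
# document to a subset of topics lets the same polysemous vector be
# explained by different topics in different documents.
# All parameters below are toy values chosen for illustration.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
dim = 5  # toy embedding dimensionality

# Toy topic parameters: each topic is a multivariate Gaussian over embeddings.
topic_means = {
    "finance":   rng.normal(loc=1.0, scale=0.1, size=dim),
    "geography": rng.normal(loc=-1.0, scale=0.1, size=dim),
    "sports":    rng.normal(loc=3.0, scale=0.1, size=dim),
}
topic_cov = np.eye(dim) * 0.5  # shared spherical covariance for simplicity

# Hypothetical embedding for the polysemous word "bank": placed halfway
# between the "finance" and "geography" topic means.
bank_vec = 0.5 * (topic_means["finance"] + topic_means["geography"])

def topic_posterior(vec, allowed_topics, weights=None):
    """p(topic | vec), restricted to the topics a document is allowed to use."""
    if weights is None:
        weights = {t: 1.0 / len(allowed_topics) for t in allowed_topics}
    scores = {
        t: weights[t] * multivariate_normal(topic_means[t], topic_cov).pdf(vec)
        for t in allowed_topics
    }
    total = sum(scores.values())
    return {t: s / total for t, s in scores.items()}

# Flat view: all topics compete for the same token, so the "bank" vector is
# split roughly evenly between the two nearby senses.
print(topic_posterior(bank_vec, ["finance", "geography", "sports"]))

# Restricted view: each document only uses its own subset of topics, so the
# same vector resolves to different senses in different documents.
print(topic_posterior(bank_vec, ["finance", "sports"]))    # finance-path doc
print(topic_posterior(bank_vec, ["geography", "sports"]))  # geography-path doc
```

In this sketch the restriction set plays the role of the document's path through the topic hierarchy; the paper itself infers both the hierarchy and the topic Gaussians jointly, which this toy example does not attempt.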