Document Network Projection in Pretrained Word Embedding Space
We present Regularized Linear Embedding (RLE), a novel method that projects a collection of linked documents (e.g., citation network) into a pretrained word embedding space. In addition to the textual content, we leverage a matrix of pairwise similarities providing complementary information (e.g., t...
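The abstract describes two ingredients: document vectors built in a pretrained word embedding space from text, plus a pairwise similarity matrix (e.g., from citation links) that supplies complementary information. A minimal sketch of that general idea (not the paper's exact RLE formulation; the toy data, the row-normalization, and the blending weight `lam` are illustrative assumptions) might look like:

```python
import numpy as np

# Toy setup: 3 documents, a 4-word vocabulary, 2-dimensional pretrained word embeddings.
word_emb = np.array([[1.0, 0.0],
                     [0.0, 1.0],
                     [1.0, 1.0],
                     [0.5, 0.5]])

# Bag-of-words counts: rows are documents, columns are words.
counts = np.array([[2, 0, 1, 0],
                   [0, 3, 0, 1],
                   [1, 1, 1, 1]], dtype=float)

# Pairwise similarity matrix S (e.g., derived from citation links),
# row-normalized so each row sums to 1.
S = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
S = S / S.sum(axis=1, keepdims=True)

# Step 1: text-only document vectors as weighted averages of word embeddings,
# so documents live in the same space as the pretrained words.
weights = counts / counts.sum(axis=1, keepdims=True)
text_vecs = weights @ word_emb

# Step 2: blend each document with its similarity-weighted neighborhood,
# pulling linked documents closer together in the embedding space.
lam = 0.5  # hypothetical blending strength, not a value from the paper
doc_vecs = (1 - lam) * text_vecs + lam * (S @ text_vecs)

print(doc_vecs.shape)  # (3, 2): one vector per document, in word-embedding space
```

Because the blended vectors stay in the word embedding space, document-to-word similarities (e.g., cosine against `word_emb`) remain directly interpretable, which is the appeal of projecting into a pretrained space.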
Main authors: Gourru, Antoine; Guille, Adrien; Velcin, Julien; Jacques, Julien
Format: Online Article Text
Language: English
Published: 2020
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7148102/ | http://dx.doi.org/10.1007/978-3-030-45442-5_19
Similar items
- Inductive Document Network Embedding with Topic-Word Attention
  by: Brochier, Robin, et al.
  Published: (2020)
- Pretrained Transformer Language Models Versus Pretrained Word Embeddings for the Detection of Accurate Health Information on Arabic Social Media: Comparative Study
  by: Albalawi, Yahya, et al.
  Published: (2022)
- An Improved Math Word Problem (MWP) Model Using Unified Pretrained Language Model (UniLM) for Pretraining
  by: Zhang, Dongqiu, et al.
  Published: (2022)
- Bayesian neural network with pretrained protein embedding enhances prediction accuracy of drug-protein interaction
  by: Kim, QHwan, et al.
  Published: (2021)
- To pretrain or not? A systematic analysis of the benefits of pretraining in diabetic retinopathy
  by: Srinivasan, Vignesh, et al.
  Published: (2022)