Jointly learning word embeddings using a corpus and a knowledge base
Methods for representing the meaning of words in vector spaces purely using the information distributed in text corpora have proved to be very valuable in various text mining and natural language processing (NLP) tasks. However, these methods still disregard the valuable semantic relational structur...
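The abstract describes learning word embeddings jointly from a text corpus and a knowledge base. The sketch below is only an illustration of that general idea, not the paper's actual objective: it assumes a GloVe-style least-squares corpus term plus a simple Euclidean penalty that pulls knowledge-base-related words together, and the vocabulary, co-occurrence counts, relation pairs, and hyperparameters are all invented for the example.

```python
import numpy as np

# Toy joint objective (illustrative only, not the authors' formulation):
#   corpus term  : push w_i . c_j towards log co-occurrence counts
#   KB term      : shrink the distance between embeddings of related words

rng = np.random.default_rng(0)
vocab = ["car", "automobile", "bank", "river", "money"]
idx = {w: i for i, w in enumerate(vocab)}
dim = 10

# Hypothetical inputs: corpus co-occurrence counts and KB relation pairs (e.g. synonyms).
cooc = {("car", "money"): 2, ("bank", "money"): 5, ("bank", "river"): 3}
kb_pairs = [("car", "automobile")]

W = rng.normal(scale=0.1, size=(len(vocab), dim))  # target word vectors
C = rng.normal(scale=0.1, size=(len(vocab), dim))  # context word vectors

lam, lr = 0.5, 0.05  # KB weight and learning rate (arbitrary choices)
for epoch in range(200):
    # Corpus term: least-squares fit of dot products to log counts.
    for (w, c), n in cooc.items():
        i, j = idx[w], idx[c]
        wi = W[i].copy()
        err = wi @ C[j] - np.log(n)
        W[i] -= lr * err * C[j]
        C[j] -= lr * err * wi
    # Knowledge-base term: pull related word vectors towards each other.
    for (w, c) in kb_pairs:
        i, j = idx[w], idx[c]
        diff = W[i] - W[j]
        W[i] -= lr * lam * diff
        W[j] += lr * lam * diff

cos = (W[idx["car"]] @ W[idx["automobile"]]
       / (np.linalg.norm(W[idx["car"]]) * np.linalg.norm(W[idx["automobile"]])))
print("cosine(car, automobile) =", cos)
```

After training, the knowledge-base penalty drives the vectors of the related pair ("car", "automobile") together even though they share no corpus co-occurrences in this toy data, which is the intuition behind combining the two sources.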
Main Authors: Alsuhaibani, Mohammed; Bollegala, Danushka; Maehara, Takanori; Kawarabayashi, Ken-ichi
Format: Online Article (Text)
Language: English
Published: Public Library of Science, 2018
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5847320/
https://www.ncbi.nlm.nih.gov/pubmed/29529052
http://dx.doi.org/10.1371/journal.pone.0193094
Similar Items
- Fine-Tuning Word Embeddings for Hierarchical Representation of Data Using a Corpus and a Knowledge Base for Various Machine Learning Applications
  by: Alsuhaibani, Mohammed, et al.
  Published: (2021)
- Learning linear transformations between counting-based and prediction-based word embeddings
  by: Bollegala, Danushka, et al.
  Published: (2017)
- An iterative approach for the global estimation of sentence similarity
  by: Kajiwara, Tomoyuki, et al.
  Published: (2017)
- Joint Word and Entity Embeddings for Entity Retrieval from a Knowledge Graph
  by: Nikolaev, Fedor, et al.
  Published: (2020)
- Improving word embeddings in Portuguese: increasing accuracy while reducing the size of the corpus
  by: Pinto, José Pedro, et al.
  Published: (2022)