The Impact of Pretrained Language Models on Negation and Speculation Detection in Cross-Lingual Medical Text: Comparative Study
BACKGROUND: Negation and speculation are critical elements in natural language processing (NLP)-related tasks, such as information extraction, as these phenomena change the truth value of a proposition. In the informal clinical narrative, these linguistic facts are used extensively with the...
Main Authors: Rivera Zavala, Renzo; Martinez, Paloma
Format: Online Article Text
Language: English
Published: JMIR Publications, 2020
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7746498/
https://www.ncbi.nlm.nih.gov/pubmed/33270027
http://dx.doi.org/10.2196/18953
Similar Items
- Analyzing transfer learning impact in biomedical cross-lingual named entity recognition and normalization
  by: Rivera-Zavala, Renzo M., et al.
  Published: (2021)
- Pretrained Transformer Language Models Versus Pretrained Word Embeddings for the Detection of Accurate Health Information on Arabic Social Media: Comparative Study
  by: Albalawi, Yahya, et al.
  Published: (2022)
- Visual-Text Reference Pretraining Model for Image Captioning
  by: Li, Pengfei, et al.
  Published: (2022)
- Investigating cross-lingual training for offensive language detection
  by: Pelicon, Andraž, et al.
  Published: (2021)
- An Improved Math Word Problem (MWP) Model Using Unified Pretrained Language Model (UniLM) for Pretraining
  by: Zhang, Dongqiu, et al.
  Published: (2022)