Semi-automating abstract screening with a natural language model pretrained on biomedical literature
We demonstrate the performance and workload impact of incorporating a natural language model, pretrained on biomedical literature citations, into an abstract-screening workflow for studies on prognostic factors in end-stage lung disease. The model was optimized on one-third of the abstracts, and...
Main Authors: Ng, Sheryl Hui-Xian; Teow, Kiok Liang; Ang, Gary Yee; Tan, Woan Shin; Hum, Allyn
Format: Online Article Text
Language: English
Published: BioMed Central, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10517490/ https://www.ncbi.nlm.nih.gov/pubmed/37740227 http://dx.doi.org/10.1186/s13643-023-02353-8
Similar Items
- Estimating Transmission Parameters for COVID-19 Clusters by Using Symptom Onset Data, Singapore, January–April 2020
  by: Ng, Sheryl Hui-Xian, et al.
  Published: (2021)
- An Improved Math Word Problem (MWP) Model Using Unified Pretrained Language Model (UniLM) for Pretraining
  by: Zhang, Dongqiu, et al.
  Published: (2022)
- Sequence-to-sequence pretraining for a less-resourced Slovenian language
  by: Ulčar, Matej, et al.
  Published: (2023)
- To pretrain or not? A systematic analysis of the benefits of pretraining in diabetic retinopathy
  by: Srinivasan, Vignesh, et al.
  Published: (2022)
- Predicting mortality in patients diagnosed with advanced dementia presenting at an acute care hospital: the PROgnostic Model for Advanced DEmentia (PRO-MADE)
  by: Kaur, Palvinder, et al.
  Published: (2023)