Epidemiologic information discovery from open-access COVID-19 case reports via pretrained language model
Although open-access data are increasingly common and useful to epidemiological research, the curation of such datasets is resource-intensive and time-consuming. Despite the existence of a major source of COVID-19 data, the regularly disclosed case reports were often written in natural language with...
Main Authors: Wang, Zhizheng; Liu, Xiao Fan; Du, Zhanwei; Wang, Lin; Wu, Ye; Holme, Petter; Lachmann, Michael; Lin, Hongfei; Wong, Zoie S.Y.; Xu, Xiao-Ke; Sun, Yuanyuan
Format: Online Article Text
Language: English
Published: Elsevier, 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9441477/ | https://www.ncbi.nlm.nih.gov/pubmed/36093379 | http://dx.doi.org/10.1016/j.isci.2022.105079
Similar Items
- Protocol for the automatic extraction of epidemiological information via a pre-trained language model
  by: Wang, Zhizheng, et al.
  Published: (2023)
- An Improved Math Word Problem (MWP) Model Using Unified Pretrained Language Model (UniLM) for Pretraining
  by: Zhang, Dongqiu, et al.
  Published: (2022)
- Sequence-to-sequence pretraining for a less-resourced Slovenian language
  by: Ulčar, Matej, et al.
  Published: (2023)
- Optimizing sentinel surveillance in temporal network epidemiology
  by: Bai, Yuan, et al.
  Published: (2017)
- To pretrain or not? A systematic analysis of the benefits of pretraining in diabetic retinopathy
  by: Srinivasan, Vignesh, et al.
  Published: (2022)