
Adapting and evaluating a deep learning language model for clinical why-question answering


Bibliographic Details
Main Authors: Wen, Andrew; Elwazir, Mohamed Y; Moon, Sungrim; Fan, Jungwei
Format: Online Article Text
Language: English
Published: Oxford University Press, 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7309262/
https://www.ncbi.nlm.nih.gov/pubmed/32607483
http://dx.doi.org/10.1093/jamiaopen/ooz072
Description
Summary: OBJECTIVES: To adapt and evaluate a deep learning language model for answering why-questions based on patient-specific clinical text. MATERIALS AND METHODS: Bidirectional encoder representations from transformers (BERT) models were trained with varying data sources to perform SQuAD 2.0 style why-question answering (why-QA) on clinical notes. The evaluation focused on: (1) comparing the merits of different training data and (2) error analysis. RESULTS: The best model achieved an accuracy of 0.707 (or 0.760 by partial match). Customizing training toward clinical language increased accuracy by 6%. DISCUSSION: The error analysis suggested that the model did not truly perform deep reasoning and that clinical why-QA might warrant more sophisticated solutions. CONCLUSION: The BERT model achieved moderate accuracy in clinical why-QA and should benefit from the rapidly evolving technology. Despite the identified limitations, it could serve as a competent proxy for question-driven clinical information extraction.
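
For readers unfamiliar with the setup, the sketch below illustrates what SQuAD 2.0 style extractive why-QA looks like in practice. It is not the authors' code: the library (Hugging Face transformers), the public checkpoint name, and the clinical note snippet are all illustrative assumptions, and SQuAD 2.0 behavior is reflected only in allowing a "no answer" prediction.

```python
# Minimal sketch of SQuAD 2.0-style extractive why-QA with a BERT model.
# The library, checkpoint, and example note are assumptions for illustration,
# not the data or models used in the paper.
from transformers import pipeline

# A publicly available BERT checkpoint fine-tuned on SQuAD 2.0.
qa = pipeline(
    "question-answering",
    model="deepset/bert-base-cased-squad2",
)

# Hypothetical clinical note snippet (not from the study data).
note = (
    "The patient was started on warfarin because of newly diagnosed "
    "atrial fibrillation and an elevated stroke risk."
)

result = qa(
    question="Why was the patient started on warfarin?",
    context=note,
    handle_impossible_answer=True,  # SQuAD 2.0: permit a 'no answer' prediction
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': '...'}
```

In this framing, the model selects an answer span from the note rather than generating text, which matches the extractive why-QA task the paper evaluates.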