Extracting seizure frequency from epilepsy clinic notes: a machine reading approach to natural language processing

Bibliographic Details
Main Authors: Xie, Kevin, Gallagher, Ryan S, Conrad, Erin C, Garrick, Chadric O, Baldassano, Steven N, Bernabei, John M, Galer, Peter D, Ghosn, Nina J, Greenblatt, Adam S, Jennings, Tara, Kornspun, Alana, Kulick-Soper, Catherine V, Panchal, Jal M, Pattnaik, Akash R, Scheid, Brittany H, Wei, Danmeng, Weitzman, Micah, Muthukrishnan, Ramya, Kim, Joongwon, Litt, Brian, Ellis, Colin A, Roth, Dan
Format: Online Article Text
Language: English
Published: Oxford University Press 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9006692/
https://www.ncbi.nlm.nih.gov/pubmed/35190834
http://dx.doi.org/10.1093/jamia/ocac018
Description
Summary:

OBJECTIVE: Seizure frequency and seizure freedom are among the most important outcome measures for patients with epilepsy. In this study, we aimed to automatically extract this clinical information from unstructured text in clinical notes. If successful, this could improve clinical decision-making in epilepsy patients and allow for rapid, large-scale retrospective research.

MATERIALS AND METHODS: We developed a finetuning pipeline for pretrained neural models to classify patients as being seizure-free and to extract text containing their seizure frequency and date of last seizure from clinical notes. We annotated 1000 notes for use as training and testing data and determined how well 3 pretrained neural models, BERT, RoBERTa, and Bio_ClinicalBERT, could identify and extract the desired information after finetuning.

RESULTS: The finetuned models (BERT(FT), Bio_ClinicalBERT(FT), and RoBERTa(FT)) achieved near-human performance when classifying patients as seizure free, with BERT(FT) and Bio_ClinicalBERT(FT) achieving accuracy scores over 80%. All 3 models also achieved human performance when extracting seizure frequency and date of last seizure, with overall F1 scores over 0.80. The best combination of models was Bio_ClinicalBERT(FT) for classification and RoBERTa(FT) for text extraction. Most of the gains in performance due to finetuning required roughly 70 annotated notes.

DISCUSSION AND CONCLUSION: Our novel machine reading approach to extracting important clinical outcomes performed at or near human performance on several tasks. This approach opens new possibilities to support clinical practice and conduct large-scale retrospective clinical research. Future studies can use our finetuning pipeline with minimal training annotations to answer new clinical questions.
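As a rough illustration of the kind of finetuning pipeline the abstract describes, the sketch below fine-tunes the publicly available Bio_ClinicalBERT checkpoint for the binary seizure-freedom classification task using the Hugging Face Transformers Trainer API. This is a minimal sketch, not the authors' released pipeline; the toy notes, labels, and hyperparameters are assumptions rather than the study's actual data or settings.

    # Minimal sketch: fine-tune a pretrained clinical language model to
    # classify clinic notes as seizure free vs. not seizure free.
    # Dataset contents and hyperparameters are illustrative assumptions.
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    MODEL_NAME = "emilyalsentzer/Bio_ClinicalBERT"  # pretrained clinical BERT

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

    # Hypothetical annotated notes: 1 = seizure free, 0 = not seizure free.
    train_data = Dataset.from_dict({
        "text": [
            "Patient reports no seizures since the last visit in March.",
            "Continues to have 2-3 focal seizures per week despite medication changes.",
        ],
        "label": [1, 0],
    })

    def tokenize(batch):
        # Truncate long clinic notes to the model's maximum input length.
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=512)

    train_data = train_data.map(tokenize, batched=True)

    args = TrainingArguments(
        output_dir="seizure_freedom_clf",
        num_train_epochs=3,
        per_device_train_batch_size=8,
        learning_rate=2e-5,
    )

    trainer = Trainer(model=model, args=args, train_dataset=train_data)
    trainer.train()

The span-extraction tasks (seizure frequency and date of last seizure) could presumably be set up analogously as extractive question answering over annotated note spans, for example with AutoModelForQuestionAnswering, though the exact formulation used in the study is described in the full paper.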