
Evaluation of a prototype machine learning tool to semi-automate data extraction for systematic literature reviews


Bibliographic Details
Main Authors: Panayi, Antonia, Ward, Katherine, Benhadji-Schaff, Amir, Ibanez-Lopez, A Santiago, Xia, Andrew, Barzilay, Regina
Format: Online Article Text
Language: English
Published: BioMed Central 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10557215/
https://www.ncbi.nlm.nih.gov/pubmed/37803451
http://dx.doi.org/10.1186/s13643-023-02351-w
_version_ 1785117039413690368
author Panayi, Antonia
Ward, Katherine
Benhadji-Schaff, Amir
Ibanez-Lopez, A Santiago
Xia, Andrew
Barzilay, Regina
author_facet Panayi, Antonia
Ward, Katherine
Benhadji-Schaff, Amir
Ibanez-Lopez, A Santiago
Xia, Andrew
Barzilay, Regina
author_sort Panayi, Antonia
collection PubMed
description BACKGROUND: Evidence-based medicine requires synthesis of research through rigorous and time-intensive systematic literature reviews (SLRs), with significant resource expenditure for data extraction from scientific publications. Machine learning may enable the timely completion of SLRs and reduce errors by automating data identification and extraction. METHODS: We evaluated the use of machine learning to extract data from publications related to SLRs in oncology (SLR 1) and Fabry disease (SLR 2). SLR 1 predominantly contained interventional studies and SLR 2 observational studies. Predefined key terms and data were manually annotated to train and test bidirectional encoder representations from transformers (BERT) and bidirectional long short-term memory machine learning models. Using human annotation as a reference, we assessed the ability of the models to identify biomedical terms of interest (entities) and their relations. We also pretrained BERT on a corpus of 100,000 open access clinical publications and/or enhanced context-dependent entity classification with a conditional random field (CRF) model. Performance was measured using the F(1) score, a metric that combines precision and recall. We defined successful matches as partial overlap of entities of the same type. RESULTS: For entity recognition, the pretrained BERT+CRF model had the best performance, with an F(1) score of 73% in SLR 1 and 70% in SLR 2. Entity types identified with the highest accuracy were metrics for progression-free survival (SLR 1, F(1) score 88%) or for patient age (SLR 2, F(1) score 82%). Treatment arm dosage was identified less successfully (F(1) scores 60% [SLR 1] and 49% [SLR 2]). The best-performing model for relation extraction, pretrained BERT relation classification, exhibited F(1) scores higher than 90% in cases with at least 80 relation examples for a pair of related entity types. CONCLUSIONS: The performance of BERT is enhanced by pretraining with biomedical literature and by combining with a CRF model. With refinement, machine learning may assist with manual data extraction for SLRs. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s13643-023-02351-w.
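The abstract's evaluation metric, the F(1) score, is the harmonic mean of precision and recall, computed here over matches defined as partial overlap between entities of the same type. A minimal sketch of that scoring rule (function and variable names are illustrative assumptions, not the authors' implementation):

```python
# Sketch of F1 scoring under a partial-overlap matching rule.
# An entity span is (start, end, type); any character overlap between
# spans of the same type counts as a successful match.

def overlaps(a, b):
    """True if two spans share the same type and at least one character."""
    return a[2] == b[2] and a[0] < b[1] and b[0] < a[1]

def f1_partial(predicted, gold):
    # Precision: fraction of predictions matching some gold entity.
    tp = sum(1 for p in predicted if any(overlaps(p, g) for g in gold))
    precision = tp / len(predicted) if predicted else 0.0
    # Recall: fraction of gold entities matched by some prediction.
    matched = sum(1 for g in gold if any(overlaps(g, p) for p in predicted))
    recall = matched / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [(0, 5, "AGE"), (10, 20, "DOSE")]
pred = [(1, 4, "AGE"), (30, 35, "DOSE")]  # one partial hit, one miss
print(f1_partial(pred, gold))  # 0.5
```

Under this lenient criterion, a prediction need only touch a gold span of the same type, so the reported F(1) scores are upper bounds relative to exact-span matching.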
format Online
Article
Text
id pubmed-10557215
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-10557215 2023-10-07 Evaluation of a prototype machine learning tool to semi-automate data extraction for systematic literature reviews Panayi, Antonia Ward, Katherine Benhadji-Schaff, Amir Ibanez-Lopez, A Santiago Xia, Andrew Barzilay, Regina Syst Rev Methodology BioMed Central 2023-10-06 /pmc/articles/PMC10557215/ /pubmed/37803451 http://dx.doi.org/10.1186/s13643-023-02351-w Text en © The Author(s) 2023. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Methodology
Panayi, Antonia
Ward, Katherine
Benhadji-Schaff, Amir
Ibanez-Lopez, A Santiago
Xia, Andrew
Barzilay, Regina
Evaluation of a prototype machine learning tool to semi-automate data extraction for systematic literature reviews
title Evaluation of a prototype machine learning tool to semi-automate data extraction for systematic literature reviews
title_full Evaluation of a prototype machine learning tool to semi-automate data extraction for systematic literature reviews
title_fullStr Evaluation of a prototype machine learning tool to semi-automate data extraction for systematic literature reviews
title_full_unstemmed Evaluation of a prototype machine learning tool to semi-automate data extraction for systematic literature reviews
title_short Evaluation of a prototype machine learning tool to semi-automate data extraction for systematic literature reviews
title_sort evaluation of a prototype machine learning tool to semi-automate data extraction for systematic literature reviews
topic Methodology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10557215/
https://www.ncbi.nlm.nih.gov/pubmed/37803451
http://dx.doi.org/10.1186/s13643-023-02351-w
work_keys_str_mv AT panayiantonia evaluationofaprototypemachinelearningtooltosemiautomatedataextractionforsystematicliteraturereviews
AT wardkatherine evaluationofaprototypemachinelearningtooltosemiautomatedataextractionforsystematicliteraturereviews
AT benhadjischaffamir evaluationofaprototypemachinelearningtooltosemiautomatedataextractionforsystematicliteraturereviews
AT ibanezlopezasantiago evaluationofaprototypemachinelearningtooltosemiautomatedataextractionforsystematicliteraturereviews
AT xiaandrew evaluationofaprototypemachinelearningtooltosemiautomatedataextractionforsystematicliteraturereviews
AT barzilayregina evaluationofaprototypemachinelearningtooltosemiautomatedataextractionforsystematicliteraturereviews