
Empirical evaluation of language modeling to ascertain cancer outcomes from clinical text reports


Bibliographic Details
Main Authors: Elmarakeby, Haitham A., Trukhanov, Pavel S., Arroyo, Vidal M., Riaz, Irbaz Bin, Schrag, Deborah, Van Allen, Eliezer M., Kehl, Kenneth L.
Format: Online Article Text
Language: English
Published: BioMed Central, 2023
Subjects: Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10474750/
https://www.ncbi.nlm.nih.gov/pubmed/37658330
http://dx.doi.org/10.1186/s12859-023-05439-1
Collection: PubMed
Description: BACKGROUND: Longitudinal data on key cancer outcomes for clinical research, such as response to treatment and disease progression, are not captured in standard cancer registry reporting. Manual extraction of such outcomes from unstructured electronic health records is a slow, resource-intensive process. Natural language processing (NLP) methods can accelerate outcome annotation, but they require substantial labeled data. Transfer learning based on language modeling, particularly using the Transformer architecture, has achieved improvements in NLP performance. However, there has been no systematic evaluation of NLP model training strategies on the extraction of cancer outcomes from unstructured text. RESULTS: We evaluated the performance of nine NLP models at the two tasks of identifying cancer response and cancer progression within imaging reports at a single academic center among patients with non-small cell lung cancer. We trained the classification models under different conditions, including training sample size, classification architecture, and language model pre-training. The training involved a labeled dataset of 14,218 imaging reports for 1112 patients with lung cancer. A subset of models was based on a pre-trained language model, DFCI-ImagingBERT, created by further pre-training a BERT-based model using an unlabeled dataset of 662,579 reports from 27,483 patients with cancer from our center. A classifier based on our DFCI-ImagingBERT, trained on more than 200 patients, achieved the best results in most experiments; however, these results were only marginally better than those of simpler “bag of words” or convolutional neural network models. CONCLUSION: When developing AI models to extract outcomes from imaging reports for clinical cancer research, if computational resources are plentiful but labeled training data are limited, large language models can be used for zero- or few-shot learning to achieve reasonable performance. When computational resources are more limited but labeled training data are readily available, even simple machine learning architectures can achieve good performance for such tasks. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12859-023-05439-1.
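The abstract notes that simple “bag of words” models performed nearly as well as the transformer-based classifiers for flagging outcomes in imaging reports. As a rough illustration only (this is not the authors' code, and the report snippets and labels below are invented), a minimal bag-of-words classifier for progression vs. no progression can be built with the standard library alone:

```python
# Illustrative sketch of a "bag of words" text classifier, as mentioned in
# the abstract. All report snippets and labels here are made up.
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def vectorize(tokens, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    counts = Counter(tokens)
    return [counts[w] for w in vocab]

def train_logreg(X, y, lr=0.5, epochs=200):
    """Logistic regression trained by plain stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - yi                        # gradient of log-loss wrt z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(text, vocab, w, b):
    x = vectorize(tokenize(text), vocab)
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z > 0 else 0

# Toy labeled data: 1 = progression, 0 = no progression.
reports = [
    ("interval increase in the dominant lung mass", 1),
    ("new liver lesions concerning for progression", 1),
    ("enlarging mediastinal lymph nodes", 1),
    ("stable disease with no new lesions", 0),
    ("decrease in size of the primary tumor", 0),
    ("no evidence of recurrence or new disease", 0),
]
vocab = sorted({t for text, _ in reports for t in tokenize(text)})
X = [vectorize(tokenize(text), vocab) for text, _ in reports]
y = [label for _, label in reports]
w, b = train_logreg(X, y)

# Classify a held-out snippet (0 or 1).
print(predict("new enlarging lesions in the liver", vocab, w, b))
```

The real study compared such baselines against CNNs and fine-tuned BERT variants across training-set sizes; this sketch only shows why the bag-of-words baseline is cheap to train when labeled reports are available.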
ID: pubmed-10474750
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Journal: BMC Bioinformatics (Research)
Published online: 2023-09-02
License: © The Author(s) 2023. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Topic: Research