
Natural Language Processing and Machine Learning Methods to Characterize Unstructured Patient-Reported Outcomes: Validation Study

Bibliographic Details
Main Authors: Lu, Zhaohua, Sim, Jin-ah, Wang, Jade X, Forrest, Christopher B, Krull, Kevin R, Srivastava, Deokumar, Hudson, Melissa M, Robison, Leslie L, Baker, Justin N, Huang, I-Chan
Format: Online Article Text
Language: English
Published: JMIR Publications 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8600437/
https://www.ncbi.nlm.nih.gov/pubmed/34730546
http://dx.doi.org/10.2196/26777
author Lu, Zhaohua
Sim, Jin-ah
Wang, Jade X
Forrest, Christopher B
Krull, Kevin R
Srivastava, Deokumar
Hudson, Melissa M
Robison, Leslie L
Baker, Justin N
Huang, I-Chan
author_sort Lu, Zhaohua
collection PubMed
description BACKGROUND: Assessing patient-reported outcomes (PROs) through interviews or conversations during clinical encounters provides insightful information about survivorship.
OBJECTIVE: This study aims to test the validity of natural language processing (NLP) and machine learning (ML) algorithms in identifying different attributes of pain interference and fatigue symptoms experienced by child and adolescent survivors of cancer, using the judgment of PRO content experts as the gold standard.
METHODS: This cross-sectional study focused on child and adolescent survivors of cancer, aged 8 to 17 years, and their caregivers, from whom 391 meaning units in the pain interference domain and 423 in the fatigue domain were generated for analysis. Data were collected from the After Completion of Therapy Clinic at St. Jude Children’s Research Hospital. Experienced pain interference and fatigue symptoms were reported through in-depth interviews. After verbatim transcription, analyzable sentences (ie, meaning units) were semantically labeled by 2 content experts for each attribute (physical, cognitive, social, or unclassified). Two NLP/ML methods were used to extract and validate the semantic features: bidirectional encoder representations from transformers (BERT), and Word2vec paired with one of two ML methods, the support vector machine or extreme gradient boosting. Receiver operating characteristic and precision-recall curves were used to evaluate the accuracy and validity of the NLP/ML methods.
RESULTS: Compared with Word2vec/support vector machine and Word2vec/extreme gradient boosting, BERT demonstrated higher accuracy in both symptom domains: 0.931 (95% CI 0.905-0.957) and 0.916 (95% CI 0.887-0.941) for problems with cognitive and social attributes of pain interference, respectively, and 0.929 (95% CI 0.903-0.953) and 0.917 (95% CI 0.891-0.943) for problems with cognitive and social attributes of fatigue, respectively. In addition, BERT yielded superior areas under the receiver operating characteristic curve for cognitive attributes in the pain interference and fatigue domains (0.923, 95% CI 0.879-0.997; 0.948, 95% CI 0.922-0.979) and superior areas under the precision-recall curve for cognitive attributes in the pain interference and fatigue domains (0.818, 95% CI 0.735-0.917; 0.855, 95% CI 0.791-0.930).
CONCLUSIONS: The BERT method performed better than the other methods. As an alternative to standard PRO surveys, collecting unstructured PROs via interviews or conversations during clinical encounters and applying NLP/ML methods can facilitate PRO assessment in child and adolescent cancer survivors.
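The METHODS passage above outlines a conventional embeddings-plus-classifier baseline: Word2vec features for each meaning unit, scored by a support vector machine or extreme gradient boosting, and evaluated against expert labels with receiver operating characteristic and precision-recall curves. The sketch below is a minimal illustration of that kind of pipeline, not the study's code; the example meaning units, labels, hyperparameters, and the choice of gensim and scikit-learn are assumptions made for demonstration only.

```python
# A minimal sketch (not the authors' code) of a Word2vec + SVM text classifier
# evaluated with ROC and precision-recall AUC. All data and settings below are
# illustrative placeholders.
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

# Hypothetical "meaning units" with binary labels (1 = cognitive attribute present).
meaning_units = [
    "the pain makes it hard to concentrate on my homework",
    "i feel too tired to think clearly in class",
    "my head gets foggy when the pain starts",
    "i could not play soccer with my friends because of the pain",
    "being tired keeps me from hanging out after school",
    "my leg hurts when i walk up the stairs",
    "i fall asleep during the day because i am so exhausted",
    "the pain wakes me up at night",
]
labels = np.array([1, 1, 1, 0, 0, 0, 0, 0])

tokenized = [unit.split() for unit in meaning_units]

# Train word embeddings on the toy corpus; a real study would use a much larger
# corpus or pretrained vectors.
w2v = Word2Vec(sentences=tokenized, vector_size=50, window=5, min_count=1, seed=0)

def embed(tokens):
    """Average the word vectors of a meaning unit into one fixed-length feature vector."""
    vectors = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vectors, axis=0) if vectors else np.zeros(w2v.wv.vector_size)

X = np.vstack([embed(tokens) for tokens in tokenized])

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=0
)

# Linear SVM on the averaged embeddings; decision_function scores feed both curves.
clf = SVC(kernel="linear", random_state=0).fit(X_train, y_train)
scores = clf.decision_function(X_test)

print("ROC AUC:", roc_auc_score(y_test, scores))
print("PR AUC (average precision):", average_precision_score(y_test, scores))
```

Averaging word vectors is the simplest way to turn a variable-length meaning unit into a fixed-length feature vector; a BERT-based classifier would instead fine-tune a pretrained transformer on the same expert-labeled units rather than building features and a separate classifier.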
format Online
Article
Text
id pubmed-8600437
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-8600437 2021-12-07 Natural Language Processing and Machine Learning Methods to Characterize Unstructured Patient-Reported Outcomes: Validation Study. J Med Internet Res, Original Paper. JMIR Publications, published 2021-11-03. ©Zhaohua Lu, Jin-ah Sim, Jade X Wang, Christopher B Forrest, Kevin R Krull, Deokumar Srivastava, Melissa M Hudson, Leslie L Robison, Justin N Baker, I-Chan Huang. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 03.11.2021.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
title Natural Language Processing and Machine Learning Methods to Characterize Unstructured Patient-Reported Outcomes: Validation Study
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8600437/
https://www.ncbi.nlm.nih.gov/pubmed/34730546
http://dx.doi.org/10.2196/26777