Inferring cancer disease response from radiology reports using large language models with data augmentation and prompting

Bibliographic Details
Main Authors: Tan, Ryan Shea Ying Cong, Lin, Qian, Low, Guat Hwa, Lin, Ruixi, Goh, Tzer Chew, Chang, Christopher Chu En, Lee, Fung Fung, Chan, Wei Yin, Tan, Wei Chong, Tey, Han Jieh, Leong, Fun Loon, Tan, Hong Qi, Nei, Wen Long, Chay, Wen Yee, Tai, David Wai Meng, Lai, Gillianne Geet Yi, Cheng, Lionel Tim-Ee, Wong, Fuh Yong, Chua, Matthew Chin Heng, Chua, Melvin Lee Kiang, Tan, Daniel Shao Weng, Thng, Choon Hua, Tan, Iain Bee Huat, Ng, Hwee Tou
Format: Online Article Text
Language: English
Published: Oxford University Press 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10531105/
https://www.ncbi.nlm.nih.gov/pubmed/37451682
http://dx.doi.org/10.1093/jamia/ocad133
Description
Summary:

OBJECTIVE: To assess large language models on their ability to accurately infer cancer disease response from free-text radiology reports.

MATERIALS AND METHODS: We assembled 10 602 computed tomography reports from cancer patients seen at a single institution. All reports were classified into: no evidence of disease, partial response, stable disease, or progressive disease. We applied transformer models, a bidirectional long short-term memory model, a convolutional neural network model, and conventional machine learning methods to this task. Data augmentation using sentence permutation with consistency loss, as well as prompt-based fine-tuning, was used on the best-performing models. Models were validated on a hold-out test set and an external validation set based on Response Evaluation Criteria in Solid Tumors (RECIST) classifications.

RESULTS: The best-performing model was the GatorTron transformer, which achieved an accuracy of 0.8916 on the test set and 0.8919 on the RECIST validation set. Data augmentation further improved the accuracy to 0.8976. Prompt-based fine-tuning did not further improve accuracy but was able to reduce the number of training reports to 500 while still achieving good performance.

DISCUSSION: These models could be used by researchers to derive progression-free survival in large datasets. They may also serve as a decision support tool by providing clinicians with an automated second opinion of disease response.

CONCLUSIONS: Large clinical language models demonstrate potential to infer cancer disease response from radiology reports at scale. Data augmentation techniques are useful to further improve performance. Prompt-based fine-tuning can significantly reduce the size of the training dataset.
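The abstract does not include the authors' implementation. As a rough illustration of the sentence-permutation augmentation with a consistency objective that it describes, the sketch below (all function names and the toy report are illustrative assumptions, not taken from the paper) shuffles a report's sentences to create label-preserving augmented copies, and computes a symmetric KL divergence between a classifier's output distributions on the original and permuted text, which a training loop could minimize to encourage order-invariant predictions:

```python
import math
import random

def permute_sentences(sentences, n_augments=2, seed=0):
    """Sentence-permutation augmentation: each augmented report contains
    the same sentences in shuffled order, so the disease-response label
    (which does not depend on sentence order) is preserved."""
    rng = random.Random(seed)
    augments = []
    for _ in range(n_augments):
        shuffled = list(sentences)
        rng.shuffle(shuffled)
        augments.append(shuffled)
    return augments

def consistency_loss(p, q, eps=1e-12):
    """Symmetric KL divergence between the classifier's class-probability
    vectors on the original report (p) and a permuted copy (q).
    Minimizing this term pushes predictions toward order invariance."""
    def kl(a, b):
        return sum(ai * math.log((ai + eps) / (bi + eps))
                   for ai, bi in zip(a, b))
    return 0.5 * (kl(p, q) + kl(q, p))

# Toy report (hypothetical text) with four-class outputs: [NED, PR, SD, PD]
report = [
    "Target lesion in the right lower lobe has decreased in size.",
    "No new lesions are identified.",
    "Findings are consistent with partial response.",
]
augmented = permute_sentences(report, n_augments=2)
# Softmax outputs on the original vs a permuted copy (made-up values):
loss = consistency_loss([0.05, 0.80, 0.10, 0.05], [0.10, 0.70, 0.15, 0.05])
```

In training, this consistency term would be added to the usual classification loss so that both the original and permuted reports are mapped to the same response class.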