
Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts

Sifting through vast textual data and summarizing key information from electronic health records (EHR) imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown immense promise in natural language processing (NLP) tasks, their efficacy on a...

Full description

Bibliographic Details
Main Authors: Van Veen, Dave, Van Uden, Cara, Blankemeier, Louis, Delbrouck, Jean-Benoit, Aali, Asad, Bluethgen, Christian, Pareek, Anuj, Polacin, Malgorzata, Reis, Eduardo Pontes, Seehofnerová, Anna, Rohatgi, Nidhi, Hosamani, Poonam, Collins, William, Ahuja, Neera, Langlotz, Curtis P., Hom, Jason, Gatidis, Sergios, Pauly, John, Chaudhari, Akshay S.
Format: Online Article Text
Language: English
Published: American Journal Experts 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10635391/
https://www.ncbi.nlm.nih.gov/pubmed/37961377
http://dx.doi.org/10.21203/rs.3.rs-3483777/v1
_version_ 1785146339551608832
author Van Veen, Dave
Van Uden, Cara
Blankemeier, Louis
Delbrouck, Jean-Benoit
Aali, Asad
Bluethgen, Christian
Pareek, Anuj
Polacin, Malgorzata
Reis, Eduardo Pontes
Seehofnerová, Anna
Rohatgi, Nidhi
Hosamani, Poonam
Collins, William
Ahuja, Neera
Langlotz, Curtis P.
Hom, Jason
Gatidis, Sergios
Pauly, John
Chaudhari, Akshay S.
author_facet Van Veen, Dave
Van Uden, Cara
Blankemeier, Louis
Delbrouck, Jean-Benoit
Aali, Asad
Bluethgen, Christian
Pareek, Anuj
Polacin, Malgorzata
Reis, Eduardo Pontes
Seehofnerová, Anna
Rohatgi, Nidhi
Hosamani, Poonam
Collins, William
Ahuja, Neera
Langlotz, Curtis P.
Hom, Jason
Gatidis, Sergios
Pauly, John
Chaudhari, Akshay S.
author_sort Van Veen, Dave
collection PubMed
description Sifting through vast textual data and summarizing key information from electronic health records (EHR) imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown immense promise in natural language processing (NLP) tasks, their efficacy on a diverse range of clinical summarization tasks has not yet been rigorously demonstrated. In this work, we apply domain adaptation methods to eight LLMs, spanning six datasets and four distinct clinical summarization tasks: radiology reports, patient questions, progress notes, and doctor-patient dialogue. Our thorough quantitative assessment reveals trade-offs between models and adaptation methods in addition to instances where recent advances in LLMs may not improve results. Further, in a clinical reader study with ten physicians, we show that summaries from our best-adapted LLMs are preferable to human summaries in terms of completeness and correctness. Our ensuing qualitative analysis highlights challenges faced by both LLMs and human experts. Lastly, we correlate traditional quantitative NLP metrics with reader study scores to enhance our understanding of how these metrics align with physician preferences. Our research marks the first evidence of LLMs outperforming human experts in clinical text summarization across multiple tasks. This implies that integrating LLMs into clinical workflows could alleviate documentation burden, empowering clinicians to focus more on personalized patient care and the inherently human aspects of medicine.
format Online
Article
Text
id pubmed-10635391
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher American Journal Experts
record_format MEDLINE/PubMed
spelling pubmed-10635391 2023-11-13 Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts Van Veen, Dave Van Uden, Cara Blankemeier, Louis Delbrouck, Jean-Benoit Aali, Asad Bluethgen, Christian Pareek, Anuj Polacin, Malgorzata Reis, Eduardo Pontes Seehofnerová, Anna Rohatgi, Nidhi Hosamani, Poonam Collins, William Ahuja, Neera Langlotz, Curtis P. Hom, Jason Gatidis, Sergios Pauly, John Chaudhari, Akshay S. Res Sq Article Sifting through vast textual data and summarizing key information from electronic health records (EHR) imposes a substantial burden on how clinicians allocate their time. Although large language models (LLMs) have shown immense promise in natural language processing (NLP) tasks, their efficacy on a diverse range of clinical summarization tasks has not yet been rigorously demonstrated. In this work, we apply domain adaptation methods to eight LLMs, spanning six datasets and four distinct clinical summarization tasks: radiology reports, patient questions, progress notes, and doctor-patient dialogue. Our thorough quantitative assessment reveals trade-offs between models and adaptation methods in addition to instances where recent advances in LLMs may not improve results. Further, in a clinical reader study with ten physicians, we show that summaries from our best-adapted LLMs are preferable to human summaries in terms of completeness and correctness. Our ensuing qualitative analysis highlights challenges faced by both LLMs and human experts. Lastly, we correlate traditional quantitative NLP metrics with reader study scores to enhance our understanding of how these metrics align with physician preferences. Our research marks the first evidence of LLMs outperforming human experts in clinical text summarization across multiple tasks. This implies that integrating LLMs into clinical workflows could alleviate documentation burden, empowering clinicians to focus more on personalized patient care and the inherently human aspects of medicine. American Journal Experts 2023-10-30 /pmc/articles/PMC10635391/ /pubmed/37961377 http://dx.doi.org/10.21203/rs.3.rs-3483777/v1 Text en https://creativecommons.org/licenses/by/4.0/ This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use.
spellingShingle Article
Van Veen, Dave
Van Uden, Cara
Blankemeier, Louis
Delbrouck, Jean-Benoit
Aali, Asad
Bluethgen, Christian
Pareek, Anuj
Polacin, Malgorzata
Reis, Eduardo Pontes
Seehofnerová, Anna
Rohatgi, Nidhi
Hosamani, Poonam
Collins, William
Ahuja, Neera
Langlotz, Curtis P.
Hom, Jason
Gatidis, Sergios
Pauly, John
Chaudhari, Akshay S.
Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts
title Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts
title_full Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts
title_fullStr Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts
title_full_unstemmed Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts
title_short Clinical Text Summarization: Adapting Large Language Models Can Outperform Human Experts
title_sort clinical text summarization: adapting large language models can outperform human experts
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10635391/
https://www.ncbi.nlm.nih.gov/pubmed/37961377
http://dx.doi.org/10.21203/rs.3.rs-3483777/v1
work_keys_str_mv AT vanveendave clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT vanudencara clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT blankemeierlouis clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT delbrouckjeanbenoit clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT aaliasad clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT bluethgenchristian clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT pareekanuj clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT polacinmalgorzata clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT reiseduardopontes clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT seehofnerovaanna clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT rohatginidhi clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT hosamanipoonam clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT collinswilliam clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT ahujaneera clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT langlotzcurtisp clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT homjason clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT gatidissergios clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT paulyjohn clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts
AT chaudhariakshays clinicaltextsummarizationadaptinglargelanguagemodelscanoutperformhumanexperts