A large language model for electronic health records

There is an increasing interest in developing artificial intelligence (AI) systems to process and interpret electronic health records (EHRs). Natural language processing (NLP) powered by pretrained language models is the key technology for medical AI systems utilizing clinical narratives. However, there are few clinical language models, the largest of which trained in the clinical domain is comparatively small at 110 million parameters (compared with billions of parameters in the general domain). It is not clear how large clinical language models with billions of parameters can help medical AI systems utilize unstructured EHRs. In this study, we develop from scratch a large clinical language model—GatorTron—using >90 billion words of text (including >82 billion words of de-identified clinical text) and systematically evaluate it on five clinical NLP tasks including clinical concept extraction, medical relation extraction, semantic textual similarity, natural language inference (NLI), and medical question answering (MQA). We examine how (1) scaling up the number of parameters and (2) scaling up the size of the training data could benefit these NLP tasks. GatorTron models scale up the clinical language model from 110 million to 8.9 billion parameters and improve five clinical NLP tasks (e.g., 9.6% and 9.5% improvement in accuracy for NLI and MQA), which can be applied to medical AI systems to improve healthcare delivery. The GatorTron models are publicly available at: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/clara/models/gatortron_og.

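Because GatorTron is a BERT-style encoder, its checkpoints drop into standard transformer tooling. The snippet below is a minimal sketch (not code from the paper) of one of the five evaluated tasks, semantic textual similarity, scored here by mean-pooling encoder outputs; the Hugging Face model identifier is an assumption for illustration, as the authors distribute the weights through the NVIDIA NGC catalog linked in the abstract.

    # Minimal sketch: semantic textual similarity with a BERT-style clinical
    # encoder such as GatorTron. The model identifier below is an assumption
    # for illustration, not from the paper; substitute whatever converted or
    # local checkpoint you actually have.
    import torch
    from transformers import AutoModel, AutoTokenizer

    MODEL_ID = "UFNLP/gatortron-base"  # assumed identifier, not from the paper

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModel.from_pretrained(MODEL_ID)
    model.eval()

    def embed(text: str) -> torch.Tensor:
        """Encode one sentence and mean-pool the last hidden states."""
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state        # (1, seq_len, dim)
        mask = inputs["attention_mask"].unsqueeze(-1).float()  # (1, seq_len, 1)
        return (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # masked mean pool

    a = embed("Patient denies chest pain or shortness of breath.")
    b = embed("No complaints of chest pain; breathing is unlabored.")
    score = torch.nn.functional.cosine_similarity(a, b).item()
    print(f"cosine similarity: {score:.3f}")

In the same spirit, the encoder could be fine-tuned with a token-classification head for clinical concept extraction or a sequence-classification head for NLI.
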
Bibliographic Details
Main Authors: Yang, Xi; Chen, Aokun; PourNejatian, Nima; Shin, Hoo Chang; Smith, Kaleb E.; Parisien, Christopher; Compas, Colin; Martin, Cheryl; Costa, Anthony B.; Flores, Mona G.; Zhang, Ying; Magoc, Tanja; Harle, Christopher A.; Lipori, Gloria; Mitchell, Duane A.; Hogan, William R.; Shenkman, Elizabeth A.; Bian, Jiang; Wu, Yonghui
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9792464/
https://www.ncbi.nlm.nih.gov/pubmed/36572766
http://dx.doi.org/10.1038/s41746-022-00742-2
Collection: PubMed (record id: pubmed-9792464)
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: NPJ Digit Med
Published Online: 2022-12-26
License: © The Author(s) 2022. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/.