
A medical multimodal large language model for future pandemics

Deep neural networks have been integrated throughout the clinical decision-making process, where they can improve diagnostic efficiency and alleviate physicians' heavy workload. Since most neural networks are supervised, their performance depends heavily on the volume and quality of available labels...

Full description

Bibliographic Details
Main Authors: Liu, Fenglin, Zhu, Tingting, Wu, Xian, Yang, Bang, You, Chenyu, Wang, Chenyang, Lu, Lei, Liu, Zhangdaihong, Zheng, Yefeng, Sun, Xu, Yang, Yang, Clifton, Lei, Clifton, David A.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10693607/
https://www.ncbi.nlm.nih.gov/pubmed/38042919
http://dx.doi.org/10.1038/s41746-023-00952-2
_version_ 1785153199329509376
author Liu, Fenglin
Zhu, Tingting
Wu, Xian
Yang, Bang
You, Chenyu
Wang, Chenyang
Lu, Lei
Liu, Zhangdaihong
Zheng, Yefeng
Sun, Xu
Yang, Yang
Clifton, Lei
Clifton, David A.
author_sort Liu, Fenglin
collection PubMed
description Deep neural networks have been integrated throughout the clinical decision-making process, where they can improve diagnostic efficiency and alleviate physicians' heavy workload. Since most neural networks are supervised, their performance depends heavily on the volume and quality of available labels. However, few such labels exist for rare diseases (e.g., new pandemics). Here we report a medical multimodal large language model (Med-MLLM) for radiograph representation learning, which can learn broad medical knowledge (e.g., image understanding, text semantics, and clinical phenotypes) from unlabelled data. As a result, when encountering a rare disease, Med-MLLM can be rapidly deployed and easily adapted to it with limited labels. Furthermore, our model supports medical data across the visual modality (e.g., chest X-ray and CT) and the textual modality (e.g., medical reports and free-text clinical notes); it can therefore be used for clinical tasks that involve both visual and textual data. We demonstrate the effectiveness of Med-MLLM by showing how it would have performed with the COVID-19 pandemic "in replay". In the retrospective setting, we test the model on early COVID-19 datasets; in the prospective setting, we test it on the newer COVID-19 Omicron variant. The experiments cover 1) three kinds of input data; 2) three kinds of downstream tasks, namely disease reporting, diagnosis, and prognosis; 3) five COVID-19 datasets; and 4) three languages, namely English, Chinese, and Spanish. All experiments show that our model provides accurate and robust COVID-19 decision support with little labelled data.
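The abstract describes a two-stage recipe: self-supervised pretraining that aligns radiographs with free-text reports on unlabelled data, followed by adaptation to a new disease with only a few labels. As an illustration only, the sketch below shows one common way such image-text alignment is implemented, a CLIP-style symmetric contrastive objective in PyTorch; the paper's actual Med-MLLM architecture and training objectives may differ, and all names here (ImageTextAligner, the encoder arguments) are hypothetical.

    # Hypothetical sketch of image-text contrastive pretraining (PyTorch).
    # This is NOT the authors' Med-MLLM implementation, only a generic
    # illustration of learning joint representations from unlabelled pairs.
    import torch
    import torch.nn.functional as F
    from torch import nn

    class ImageTextAligner(nn.Module):
        def __init__(self, image_encoder: nn.Module, text_encoder: nn.Module):
            super().__init__()
            self.image_encoder = image_encoder  # e.g. a CNN/ViT over chest X-rays
            self.text_encoder = text_encoder    # e.g. a transformer over reports
            # Learnable temperature, initialised to ln(1/0.07) as in CLIP
            self.logit_scale = nn.Parameter(torch.tensor(2.659))

        def forward(self, images: torch.Tensor, texts: torch.Tensor) -> torch.Tensor:
            img = F.normalize(self.image_encoder(images), dim=-1)  # (B, dim)
            txt = F.normalize(self.text_encoder(texts), dim=-1)    # (B, dim)
            logits = self.logit_scale.exp() * img @ txt.t()        # (B, B) similarities
            labels = torch.arange(img.size(0), device=img.device)  # matched pairs lie on the diagonal
            # Symmetric InfoNCE: each image should retrieve its own report, and vice versa
            return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

After pretraining of this kind, adapting to a rare disease could be as simple as fitting a small classification head on the frozen image encoder using the limited labels available, which is consistent with the abstract's claim of rapid deployment with little labelled data.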
format Online
Article
Text
id pubmed-10693607
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-106936072023-12-04 A medical multimodal large language model for future pandemics Liu, Fenglin Zhu, Tingting Wu, Xian Yang, Bang You, Chenyu Wang, Chenyang Lu, Lei Liu, Zhangdaihong Zheng, Yefeng Sun, Xu Yang, Yang Clifton, Lei Clifton, David A. NPJ Digit Med Article Nature Publishing Group UK 2023-12-02 /pmc/articles/PMC10693607/ /pubmed/38042919 http://dx.doi.org/10.1038/s41746-023-00952-2 Text en © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title A medical multimodal large language model for future pandemics
title_sort medical multimodal large language model for future pandemics
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10693607/
https://www.ncbi.nlm.nih.gov/pubmed/38042919
http://dx.doi.org/10.1038/s41746-023-00952-2
work_keys_str_mv AT liufenglin amedicalmultimodallargelanguagemodelforfuturepandemics
AT zhutingting amedicalmultimodallargelanguagemodelforfuturepandemics
AT wuxian amedicalmultimodallargelanguagemodelforfuturepandemics
AT yangbang amedicalmultimodallargelanguagemodelforfuturepandemics
AT youchenyu amedicalmultimodallargelanguagemodelforfuturepandemics
AT wangchenyang amedicalmultimodallargelanguagemodelforfuturepandemics
AT lulei amedicalmultimodallargelanguagemodelforfuturepandemics
AT liuzhangdaihong amedicalmultimodallargelanguagemodelforfuturepandemics
AT zhengyefeng amedicalmultimodallargelanguagemodelforfuturepandemics
AT sunxu amedicalmultimodallargelanguagemodelforfuturepandemics
AT yangyang amedicalmultimodallargelanguagemodelforfuturepandemics
AT cliftonlei amedicalmultimodallargelanguagemodelforfuturepandemics
AT cliftondavida amedicalmultimodallargelanguagemodelforfuturepandemics
AT liufenglin medicalmultimodallargelanguagemodelforfuturepandemics
AT zhutingting medicalmultimodallargelanguagemodelforfuturepandemics
AT wuxian medicalmultimodallargelanguagemodelforfuturepandemics
AT yangbang medicalmultimodallargelanguagemodelforfuturepandemics
AT youchenyu medicalmultimodallargelanguagemodelforfuturepandemics
AT wangchenyang medicalmultimodallargelanguagemodelforfuturepandemics
AT lulei medicalmultimodallargelanguagemodelforfuturepandemics
AT liuzhangdaihong medicalmultimodallargelanguagemodelforfuturepandemics
AT zhengyefeng medicalmultimodallargelanguagemodelforfuturepandemics
AT sunxu medicalmultimodallargelanguagemodelforfuturepandemics
AT yangyang medicalmultimodallargelanguagemodelforfuturepandemics
AT cliftonlei medicalmultimodallargelanguagemodelforfuturepandemics
AT cliftondavida medicalmultimodallargelanguagemodelforfuturepandemics