Leveraging pre-trained language models for mining microbiome-disease relationships

Bibliographic Details
Main Authors: Karkera, Nikitha, Acharya, Sathwik, Palaniappan, Sucheendra K.
Format: Online Article Text
Language: English
Published: BioMed Central 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10357883/
https://www.ncbi.nlm.nih.gov/pubmed/37468830
http://dx.doi.org/10.1186/s12859-023-05411-z
_version_ 1785075592379498496
author Karkera, Nikitha
Acharya, Sathwik
Palaniappan, Sucheendra K.
author_facet Karkera, Nikitha
Acharya, Sathwik
Palaniappan, Sucheendra K.
author_sort Karkera, Nikitha
collection PubMed
description BACKGROUND: The growing recognition of the microbiome’s impact on human health and well-being has prompted extensive research into discovering the links between microbiome dysbiosis and disease (healthy) states. However, this valuable information is scattered in unstructured form within biomedical literature. The structured extraction and qualification of microbe-disease interactions are important. In parallel, recent advancements in deep-learning-based natural language processing algorithms have revolutionized language-related tasks such as ours. This study aims to leverage state-of-the-art deep-learning language models to extract microbe-disease relationships from biomedical literature. RESULTS: In this study, we first evaluate multiple pre-trained large language models within a zero-shot or few-shot learning context. In this setting, the models performed poorly out of the box, emphasizing the need for domain-specific fine-tuning of these language models. Subsequently, we fine-tune multiple language models (specifically, GPT-3, BioGPT, BioMedLM, BERT, BioMegatron, PubMedBERT, BioClinicalBERT, and BioLinkBERT) using labeled training data and evaluate their performance. Our experimental results demonstrate the state-of-the-art performance of these fine-tuned models (specifically GPT-3, BioMedLM, and BioLinkBERT), achieving an average F1 score, precision, and recall of over [Formula: see text] compared to the previous best of 0.74. CONCLUSION: Overall, this study establishes that pre-trained language models excel as transfer learners when fine-tuned with domain and problem-specific data, enabling them to achieve state-of-the-art results even with limited training data for extracting microbiome-disease interactions from scientific publications. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12859-023-05411-z.
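The abstract describes fine-tuning pre-trained biomedical language models on labeled sentences and scoring them with precision, recall, and F1. The sketch below illustrates that general recipe with the Hugging Face transformers library; the BioLinkBERT checkpoint id, the three-way relation label scheme, and the toy sentences are illustrative assumptions, not the authors' actual data or configuration.

```python
# Minimal sketch: fine-tune a biomedical encoder for microbe-disease relation
# classification, then report macro-averaged precision / recall / F1.
# Assumptions: the checkpoint id, the 3-way label scheme, and the toy
# training sentences below are illustrative, not the paper's real setup.
import torch
from torch.utils.data import Dataset
from sklearn.metrics import precision_recall_fscore_support
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "michiyasunaga/BioLinkBERT-base"     # assumed Hugging Face hub id
LABELS = ["positive", "negative", "no_relation"]  # assumed relation classes

class RelationDataset(Dataset):
    """Sentences mentioning a microbe and a disease, with a relation label."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=256)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

def compute_metrics(eval_pred):
    # Trainer passes (logits, gold labels); report the metrics used in the abstract.
    logits, labels = eval_pred
    preds = logits.argmax(axis=-1)
    p, r, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0)
    return {"precision": p, "recall": r, "f1": f1}

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS))

# Toy labeled examples standing in for the curated training corpus.
train_texts = [
    "Faecalibacterium prausnitzii is depleted in patients with Crohn's disease.",
    "Akkermansia muciniphila abundance was not associated with type 2 diabetes.",
]
train_labels = [0, 2]
train_ds = RelationDataset(train_texts, train_labels, tokenizer)

args = TrainingArguments(
    output_dir="md-relation-model",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  eval_dataset=train_ds, compute_metrics=compute_metrics)
trainer.train()
print(trainer.evaluate())
```

The same two-sentence set doubles as the eval split purely to keep the sketch self-contained; in practice the labeled corpus would be split into separate train and test folds before reporting precision, recall, and F1.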
format Online
Article
Text
id pubmed-10357883
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-10357883 2023-07-21 Leveraging pre-trained language models for mining microbiome-disease relationships Karkera, Nikitha Acharya, Sathwik Palaniappan, Sucheendra K. BMC Bioinformatics Research BACKGROUND: The growing recognition of the microbiome’s impact on human health and well-being has prompted extensive research into discovering the links between microbiome dysbiosis and disease (healthy) states. However, this valuable information is scattered in unstructured form within biomedical literature. The structured extraction and qualification of microbe-disease interactions are important. In parallel, recent advancements in deep-learning-based natural language processing algorithms have revolutionized language-related tasks such as ours. This study aims to leverage state-of-the-art deep-learning language models to extract microbe-disease relationships from biomedical literature. RESULTS: In this study, we first evaluate multiple pre-trained large language models within a zero-shot or few-shot learning context. In this setting, the models performed poorly out of the box, emphasizing the need for domain-specific fine-tuning of these language models. Subsequently, we fine-tune multiple language models (specifically, GPT-3, BioGPT, BioMedLM, BERT, BioMegatron, PubMedBERT, BioClinicalBERT, and BioLinkBERT) using labeled training data and evaluate their performance. Our experimental results demonstrate the state-of-the-art performance of these fine-tuned models (specifically GPT-3, BioMedLM, and BioLinkBERT), achieving an average F1 score, precision, and recall of over [Formula: see text] compared to the previous best of 0.74. CONCLUSION: Overall, this study establishes that pre-trained language models excel as transfer learners when fine-tuned with domain and problem-specific data, enabling them to achieve state-of-the-art results even with limited training data for extracting microbiome-disease interactions from scientific publications. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12859-023-05411-z. BioMed Central 2023-07-19 /pmc/articles/PMC10357883/ /pubmed/37468830 http://dx.doi.org/10.1186/s12859-023-05411-z Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
spellingShingle Research
Karkera, Nikitha
Acharya, Sathwik
Palaniappan, Sucheendra K.
Leveraging pre-trained language models for mining microbiome-disease relationships
title Leveraging pre-trained language models for mining microbiome-disease relationships
title_full Leveraging pre-trained language models for mining microbiome-disease relationships
title_fullStr Leveraging pre-trained language models for mining microbiome-disease relationships
title_full_unstemmed Leveraging pre-trained language models for mining microbiome-disease relationships
title_short Leveraging pre-trained language models for mining microbiome-disease relationships
title_sort leveraging pre-trained language models for mining microbiome-disease relationships
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10357883/
https://www.ncbi.nlm.nih.gov/pubmed/37468830
http://dx.doi.org/10.1186/s12859-023-05411-z
work_keys_str_mv AT karkeranikitha leveragingpretrainedlanguagemodelsforminingmicrobiomediseaserelationships
AT acharyasathwik leveragingpretrainedlanguagemodelsforminingmicrobiomediseaserelationships
AT palaniappansucheendrak leveragingpretrainedlanguagemodelsforminingmicrobiomediseaserelationships