BioBERT: a pre-trained biomedical language representation model for biomedical text mining

MOTIVATION: Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has...

Bibliographic Details
Main Authors: Lee, Jinhyuk, Yoon, Wonjin, Kim, Sungdong, Kim, Donghyeon, Kim, Sunkyu, So, Chan Ho, Kang, Jaewoo
Format: Online Article Text
Language: English
Published: Oxford University Press 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7703786/
https://www.ncbi.nlm.nih.gov/pubmed/31501885
http://dx.doi.org/10.1093/bioinformatics/btz682
_version_ 1783616696187092992
author Lee, Jinhyuk
Yoon, Wonjin
Kim, Sungdong
Kim, Donghyeon
Kim, Sunkyu
So, Chan Ho
Kang, Jaewoo
author_facet Lee, Jinhyuk
Yoon, Wonjin
Kim, Sungdong
Kim, Donghyeon
Kim, Sunkyu
So, Chan Ho
Kang, Jaewoo
author_sort Lee, Jinhyuk
collection PubMed
description MOTIVATION: Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. RESULTS: We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts. AVAILABILITY AND IMPLEMENTATION: We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.
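As the description above notes, BioBERT keeps the BERT architecture essentially unchanged across tasks and only adapts the output layer during fine-tuning, with the pre-trained weights and fine-tuning code released at the GitHub repositories listed there. As a rough illustration of how such weights can be used for biomedical named entity recognition, the sketch below loads them through the Hugging Face transformers library rather than the paper's own fine-tuning code; the Hub model identifier and the label set are assumptions for illustration, not taken from the article.

    # Minimal sketch (not from the paper): load BioBERT weights and attach a
    # token-classification head for biomedical NER using Hugging Face transformers.
    # The Hub model ID below is an assumed mirror of the weights released at
    # https://github.com/naver/biobert-pretrained; the label set is illustrative.
    from transformers import AutoTokenizer, AutoModelForTokenClassification

    model_name = "dmis-lab/biobert-base-cased-v1.1"  # assumed Hub ID, not from the paper
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # BIO tags for a single entity type (diseases); real datasets define their own labels.
    labels = ["O", "B-Disease", "I-Disease"]
    model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(labels))
    # The classification head is newly initialized here and would still need
    # fine-tuning on a labeled NER corpus before its predictions are meaningful.

    inputs = tokenizer("Familial hemiplegic migraine is caused by mutations in CACNA1A.",
                       return_tensors="pt")
    logits = model(**inputs).logits       # shape: (1, sequence_length, len(labels))
    predicted = logits.argmax(dim=-1)[0]  # one label index per WordPiece token
    print([labels[i] for i in predicted.tolist()])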
format Online
Article
Text
id pubmed-7703786
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Oxford University Press
record_format MEDLINE/PubMed
spelling pubmed-7703786 2020-12-07 BioBERT: a pre-trained biomedical language representation model for biomedical text mining Lee, Jinhyuk Yoon, Wonjin Kim, Sungdong Kim, Donghyeon Kim, Sunkyu So, Chan Ho Kang, Jaewoo Bioinformatics Original Papers MOTIVATION: Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. RESULTS: We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts. AVAILABILITY AND IMPLEMENTATION: We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert. Oxford University Press 2020-02-15 2019-09-10 /pmc/articles/PMC7703786/ /pubmed/31501885 http://dx.doi.org/10.1093/bioinformatics/btz682 Text en © The Author(s) 2019. Published by Oxford University Press. http://creativecommons.org/licenses/by/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Original Papers
Lee, Jinhyuk
Yoon, Wonjin
Kim, Sungdong
Kim, Donghyeon
Kim, Sunkyu
So, Chan Ho
Kang, Jaewoo
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
title BioBERT: a pre-trained biomedical language representation model for biomedical text mining
title_full BioBERT: a pre-trained biomedical language representation model for biomedical text mining
title_fullStr BioBERT: a pre-trained biomedical language representation model for biomedical text mining
title_full_unstemmed BioBERT: a pre-trained biomedical language representation model for biomedical text mining
title_short BioBERT: a pre-trained biomedical language representation model for biomedical text mining
title_sort biobert: a pre-trained biomedical language representation model for biomedical text mining
topic Original Papers
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7703786/
https://www.ncbi.nlm.nih.gov/pubmed/31501885
http://dx.doi.org/10.1093/bioinformatics/btz682
work_keys_str_mv AT leejinhyuk biobertapretrainedbiomedicallanguagerepresentationmodelforbiomedicaltextmining
AT yoonwonjin biobertapretrainedbiomedicallanguagerepresentationmodelforbiomedicaltextmining
AT kimsungdong biobertapretrainedbiomedicallanguagerepresentationmodelforbiomedicaltextmining
AT kimdonghyeon biobertapretrainedbiomedicallanguagerepresentationmodelforbiomedicaltextmining
AT kimsunkyu biobertapretrainedbiomedicallanguagerepresentationmodelforbiomedicaltextmining
AT sochanho biobertapretrainedbiomedicallanguagerepresentationmodelforbiomedicaltextmining
AT kangjaewoo biobertapretrainedbiomedicallanguagerepresentationmodelforbiomedicaltextmining