Improving the recall of biomedical named entity recognition with label re-correction and knowledge distillation
Main Authors: | Zhou, Huiwei; Liu, Zhe; Lang, Chengkun; Xu, Yibin; Lin, Yingyu; Hou, Junjie |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central 2021 |
Subjects: | Research |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8170952/ https://www.ncbi.nlm.nih.gov/pubmed/34078270 http://dx.doi.org/10.1186/s12859-021-04200-w |
_version_ | 1783702341151621120 |
---|---|
author | Zhou, Huiwei; Liu, Zhe; Lang, Chengkun; Xu, Yibin; Lin, Yingyu; Hou, Junjie |
author_facet | Zhou, Huiwei; Liu, Zhe; Lang, Chengkun; Xu, Yibin; Lin, Yingyu; Hou, Junjie |
author_sort | Zhou, Huiwei |
collection | PubMed |
description | BACKGROUND: Biomedical named entity recognition is one of the most essential tasks in biomedical information extraction. Previous studies suffer from inadequately annotated datasets, especially the limited knowledge contained in them. METHODS: To remedy this issue, we propose a novel Biomedical Named Entity Recognition (BioNER) framework with label re-correction and knowledge distillation strategies, which can not only create large, high-quality datasets but also produce a high-performance recognition model. Our framework is inspired by two points: (1) named entity recognition should be considered from the perspectives of both coverage and accuracy; (2) trustworthy annotations should be yielded by iterative correction. First, for coverage, we annotate chemical and disease entities in a large-scale unlabeled dataset with PubTator to generate a weakly labeled dataset. For accuracy, we then filter it with multiple knowledge bases to generate another weakly labeled dataset. Next, the two datasets are revised by a label re-correction strategy to construct two high-quality datasets, which are used to train two recognition models, respectively. Finally, we compress the knowledge in the two models into a single recognition model with knowledge distillation. RESULTS: Experiments on the BioCreative V chemical-disease relation corpus and the NCBI Disease corpus show that knowledge from large-scale datasets significantly improves the performance of BioNER, especially its recall, leading to new state-of-the-art results. CONCLUSIONS: We propose a framework with label re-correction and knowledge distillation strategies. Comparison results show that the two kinds of knowledge in the two re-corrected datasets are complementary and both effective for BioNER. |
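The description above mentions a final step that compresses two recognition models into one with knowledge distillation. As an illustration only, and not the authors' released code, the sketch below shows one common way such token-level distillation is set up: a student loss combining cross-entropy on the (re-corrected) gold tags with a KL term toward the averaged, temperature-softened distributions of two teacher models. All function names, tensor shapes, and hyperparameters here are assumptions for the sketch.

```python
# Illustrative sketch only (not the authors' code): distilling the knowledge of
# two teacher BioNER models into a single student, assuming all three models
# emit per-token logits over the same BIO tag set.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher1_logits, teacher2_logits,
                      gold_labels, temperature=2.0, alpha=0.5):
    """Hard-label cross-entropy on re-corrected gold tags plus soft-label KL
    divergence toward the averaged teacher distributions.

    student_logits, teacher*_logits: (batch, seq_len, num_tags)
    gold_labels: (batch, seq_len); -100 marks padding tokens to ignore.
    """
    num_tags = student_logits.size(-1)
    # Supervised loss on the (re-corrected) gold annotations.
    ce = F.cross_entropy(student_logits.view(-1, num_tags),
                         gold_labels.view(-1), ignore_index=-100)
    # Distillation loss: the student mimics the two teachers' averaged,
    # temperature-softened tag distributions.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teachers = 0.5 * (F.softmax(teacher1_logits / temperature, dim=-1)
                        + F.softmax(teacher2_logits / temperature, dim=-1))
    kd = F.kl_div(log_p_student, p_teachers,
                  reduction="batchmean") * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kd


# Minimal usage with random tensors (batch=2, seq_len=8, 5 BIO tags):
if __name__ == "__main__":
    student = torch.randn(2, 8, 5, requires_grad=True)
    t1, t2 = torch.randn(2, 8, 5), torch.randn(2, 8, 5)
    labels = torch.randint(0, 5, (2, 8))
    loss = distillation_loss(student, t1, t2, labels)
    loss.backward()
    print(float(loss))
```

The temperature and the hard/soft weighting `alpha` are the usual knobs in this kind of setup; the paper itself may weight or combine the two teachers differently.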
format | Online Article Text |
id | pubmed-8170952 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-81709522021-06-03 Improving the recall of biomedical named entity recognition with label re-correction and knowledge distillation Zhou, Huiwei; Liu, Zhe; Lang, Chengkun; Xu, Yibin; Lin, Yingyu; Hou, Junjie BMC Bioinformatics Research BioMed Central 2021-06-02 /pmc/articles/PMC8170952/ /pubmed/34078270 http://dx.doi.org/10.1186/s12859-021-04200-w Text en © The Author(s) 2021. This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Research Zhou, Huiwei; Liu, Zhe; Lang, Chengkun; Xu, Yibin; Lin, Yingyu; Hou, Junjie Improving the recall of biomedical named entity recognition with label re-correction and knowledge distillation |
title | Improving the recall of biomedical named entity recognition with label re-correction and knowledge distillation |
title_full | Improving the recall of biomedical named entity recognition with label re-correction and knowledge distillation |
title_fullStr | Improving the recall of biomedical named entity recognition with label re-correction and knowledge distillation |
title_full_unstemmed | Improving the recall of biomedical named entity recognition with label re-correction and knowledge distillation |
title_short | Improving the recall of biomedical named entity recognition with label re-correction and knowledge distillation |
title_sort | improving the recall of biomedical named entity recognition with label re-correction and knowledge distillation |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8170952/ https://www.ncbi.nlm.nih.gov/pubmed/34078270 http://dx.doi.org/10.1186/s12859-021-04200-w |
work_keys_str_mv | AT zhouhuiwei improvingtherecallofbiomedicalnamedentityrecognitionwithlabelrecorrectionandknowledgedistillation AT liuzhe improvingtherecallofbiomedicalnamedentityrecognitionwithlabelrecorrectionandknowledgedistillation AT langchengkun improvingtherecallofbiomedicalnamedentityrecognitionwithlabelrecorrectionandknowledgedistillation AT xuyibin improvingtherecallofbiomedicalnamedentityrecognitionwithlabelrecorrectionandknowledgedistillation AT linyingyu improvingtherecallofbiomedicalnamedentityrecognitionwithlabelrecorrectionandknowledgedistillation AT houjunjie improvingtherecallofbiomedicalnamedentityrecognitionwithlabelrecorrectionandknowledgedistillation |