
Deep learning with sentence embeddings pre-trained on biomedical corpora improves the performance of finding similar sentences in electronic medical records

Bibliographic Details
Main Authors: Chen, Qingyu, Du, Jingcheng, Kim, Sun, Wilbur, W. John, Lu, Zhiyong
Format: Online Article Text
Language: English
Published: BioMed Central 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7191680/
https://www.ncbi.nlm.nih.gov/pubmed/32349758
http://dx.doi.org/10.1186/s12911-020-1044-0
collection PubMed
description BACKGROUND: Capturing sentence semantics plays a vital role in a range of text mining applications. Despite continuous efforts on the development of related datasets and models in the general domain, both datasets and models remain limited in the biomedical and clinical domains. The BioCreative/OHNLP2018 organizers made the first attempt to annotate 1068 sentence pairs from clinical notes and called for a community effort to tackle the Semantic Textual Similarity (BioCreative/OHNLP STS) challenge. METHODS: We developed models using traditional machine learning and deep learning approaches. For the post-challenge phase, we focused on two models: the Random Forest and the Encoder Network. We applied sentence embeddings pre-trained on PubMed abstracts and MIMIC-III clinical notes and updated the Random Forest and the Encoder Network accordingly. RESULTS: The official results showed that our best submission was an ensemble of eight models. It achieved a Pearson correlation coefficient of 0.8328 – the highest performance among 13 submissions from 4 teams. In the post-challenge phase, the performance of both the Random Forest and the Encoder Network improved; in particular, the correlation of the Encoder Network improved by ~13%. During the challenge task, no end-to-end deep learning model outperformed the machine learning models that take manually crafted features. In contrast, with the sentence embeddings pre-trained on biomedical corpora, the Encoder Network now achieves a correlation of ~0.84, which is higher than the original best model. The ensemble model taking the improved versions of the Random Forest and the Encoder Network as inputs further increased performance to 0.8528. CONCLUSIONS: Deep learning models with sentence embeddings pre-trained on biomedical corpora achieve the highest performance on the test set.
Through error analysis, we find that end-to-end deep learning models and traditional machine learning models with manually crafted features complement each other by identifying different types of similar sentences. We suggest that a combination of these models can better find similar sentences in practice.
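A minimal sketch of the core idea in the abstract above: sentence pairs are scored by the similarity of their sentence embeddings, and a system is evaluated with the Pearson correlation coefficient (the challenge's official metric) between predicted scores and gold similarity labels. The toy embedding vectors, labels, and cosine-similarity scorer below are illustrative assumptions for exposition; the paper's actual models use embeddings pre-trained on PubMed abstracts and MIMIC-III clinical notes feeding a Random Forest and an Encoder Network.

```python
# Sketch: embedding-based sentence similarity scored against gold labels
# with the Pearson correlation coefficient. All vectors/labels are toy data.
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pearson(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical sentence-pair embeddings with gold similarity labels
# (STS-style 0-5 scale): two near-paraphrase pairs, one unrelated pair.
pairs = [
    (([0.9, 0.1, 0.0], [0.8, 0.2, 0.1]), 4.5),
    (([0.1, 0.9, 0.0], [0.2, 0.8, 0.1]), 4.0),
    (([0.9, 0.0, 0.1], [0.0, 0.1, 0.9]), 0.5),
]
predicted = [cosine(u, v) for (u, v), _ in pairs]
gold = [label for _, label in pairs]
print(round(pearson(predicted, gold), 4))
```

In the paper's setting the cosine score would be one feature among many (for the Random Forest) or replaced by a learned scorer (the Encoder Network); the Pearson step is the same in either case.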
format Online
Article
Text
id pubmed-7191680
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-7191680 2020-05-04 BMC Med Inform Decis Mak Research BioMed Central 2020-04-30 /pmc/articles/PMC7191680/ /pubmed/32349758 http://dx.doi.org/10.1186/s12911-020-1044-0 Text en © This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply. 2020 Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7191680/
https://www.ncbi.nlm.nih.gov/pubmed/32349758
http://dx.doi.org/10.1186/s12911-020-1044-0