Disambiguating Clinical Abbreviations Using a One-Fits-All Classifier Based on Deep Learning Techniques
Background Abbreviations are considered an essential part of the clinical narrative; they are used not only to save time and space but also to hide serious or incurable illnesses. Misinterpreting clinical abbreviations can affect many aspects of patient care, as well as downstream services such as clinical decision support systems...
Main Authors: | Jaber, Areej; Martínez, Paloma |
Format: | Online Article Text |
Language: | English |
Published: | Georg Thieme Verlag KG, 2022 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9246508/ https://www.ncbi.nlm.nih.gov/pubmed/35104909 http://dx.doi.org/10.1055/s-0042-1742388 |
_version_ | 1784738984088305664 |
author | Jaber, Areej Martínez, Paloma |
author_facet | Jaber, Areej Martínez, Paloma |
author_sort | Jaber, Areej |
collection | PubMed |
description | Background Abbreviations are considered an essential part of the clinical narrative; they are used not only to save time and space but also to hide serious or incurable illnesses. Misinterpreting clinical abbreviations can affect many aspects of patient care, as well as downstream services such as clinical decision support systems. There is no consensus in the scientific community on how new abbreviations are created, which makes them difficult to understand. Disambiguating clinical abbreviations aims to predict the exact meaning of an abbreviation from its context, a crucial step in understanding clinical notes. Objectives Disambiguating clinical abbreviations is an essential task in information extraction from medical texts. Deep contextualized representation models have shown promising results on most word sense disambiguation tasks. In this work, we propose a one-fits-all classifier to disambiguate clinical abbreviations with deep contextualized representations from pretrained language models such as Bidirectional Encoder Representations from Transformers (BERT). Methods A set of experiments with different pretrained clinical BERT models was performed to investigate fine-tuning methods for the disambiguation of clinical abbreviations. One-fits-all classifiers were used to improve the disambiguation of rare clinical abbreviations. Results One-fits-all classifiers with deep contextualized representations from the Bioclinical, BlueBERT, and MS_BERT pretrained models improved accuracy on the University of Minnesota data set, achieving 98.99, 98.75, and 99.13%, respectively. All models outperform the previous state-of-the-art of around 98.39%, with the best accuracy obtained with the MS_BERT model.
Conclusion Deep contextualized representations obtained by fine-tuning pretrained language models proved sufficient for disambiguating clinical abbreviations; they are robust for rare and unseen abbreviations and avoid building a separate classifier for each abbreviation. Transfer learning can improve the development of practical abbreviation disambiguation systems. |
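The one-fits-all framing described in the abstract can be illustrated in a small sketch: rather than training a separate classifier per abbreviation, each (context, candidate sense) pair is turned into a single input for one shared BERT-style sequence-pair classifier. The marker tokens and the pairing scheme below are illustrative assumptions, not the authors' exact preprocessing:

```python
def build_candidate_inputs(context, abbreviation, senses):
    """Build (context, candidate-sense) input pairs for a single
    shared disambiguation classifier. The abbreviation is highlighted
    in the context so the same model works for any abbreviation;
    the [ABBR] marker tokens are an assumed convention."""
    marked = context.replace(abbreviation, f"[ABBR] {abbreviation} [/ABBR]", 1)
    # Each pair would be scored by one shared BERT sequence-pair
    # classifier; the highest-scoring sense is the predicted meaning.
    return [(marked, sense) for sense in senses]


pairs = build_candidate_inputs(
    "Patient presented with elevated BP on admission.",
    "BP",
    ["blood pressure", "bipolar disorder"],
)
```

Because every abbreviation is reduced to the same pairwise scoring problem, senses that are rare for one abbreviation can still benefit from patterns the shared model learned on others, which is the advantage the abstract attributes to the one-fits-all design.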
format | Online Article Text |
id | pubmed-9246508 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Georg Thieme Verlag KG |
record_format | MEDLINE/PubMed |
spelling | pubmed-9246508 2022-07-01 Disambiguating Clinical Abbreviations Using a One-Fits-All Classifier Based on Deep Learning Techniques Jaber, Areej Martínez, Paloma Methods Inf Med Background Abbreviations are considered an essential part of the clinical narrative; they are used not only to save time and space but also to hide serious or incurable illnesses. Misinterpreting clinical abbreviations can affect many aspects of patient care, as well as downstream services such as clinical decision support systems. There is no consensus in the scientific community on how new abbreviations are created, which makes them difficult to understand. Disambiguating clinical abbreviations aims to predict the exact meaning of an abbreviation from its context, a crucial step in understanding clinical notes. Objectives Disambiguating clinical abbreviations is an essential task in information extraction from medical texts. Deep contextualized representation models have shown promising results on most word sense disambiguation tasks. In this work, we propose a one-fits-all classifier to disambiguate clinical abbreviations with deep contextualized representations from pretrained language models such as Bidirectional Encoder Representations from Transformers (BERT). Methods A set of experiments with different pretrained clinical BERT models was performed to investigate fine-tuning methods for the disambiguation of clinical abbreviations. One-fits-all classifiers were used to improve the disambiguation of rare clinical abbreviations. Results One-fits-all classifiers with deep contextualized representations from the Bioclinical, BlueBERT, and MS_BERT pretrained models improved accuracy on the University of Minnesota data set, achieving 98.99, 98.75, and 99.13%, respectively. All models outperform the previous state-of-the-art of around 98.39%, with the best accuracy obtained with the MS_BERT model.
Conclusion Deep contextualized representations obtained by fine-tuning pretrained language models proved sufficient for disambiguating clinical abbreviations; they are robust for rare and unseen abbreviations and avoid building a separate classifier for each abbreviation. Transfer learning can improve the development of practical abbreviation disambiguation systems. Georg Thieme Verlag KG 2022-02-01 /pmc/articles/PMC9246508/ /pubmed/35104909 http://dx.doi.org/10.1055/s-0042-1742388 Text en The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed, or built upon. ( https://creativecommons.org/licenses/by-nc-nd/4.0/ ) |
spellingShingle | Jaber, Areej Martínez, Paloma Disambiguating Clinical Abbreviations Using a One-Fits-All Classifier Based on Deep Learning Techniques |
title | Disambiguating Clinical Abbreviations Using a One-Fits-All Classifier Based on Deep Learning Techniques |
title_full | Disambiguating Clinical Abbreviations Using a One-Fits-All Classifier Based on Deep Learning Techniques |
title_fullStr | Disambiguating Clinical Abbreviations Using a One-Fits-All Classifier Based on Deep Learning Techniques |
title_full_unstemmed | Disambiguating Clinical Abbreviations Using a One-Fits-All Classifier Based on Deep Learning Techniques |
title_short | Disambiguating Clinical Abbreviations Using a One-Fits-All Classifier Based on Deep Learning Techniques |
title_sort | disambiguating clinical abbreviations using a one-fits-all classifier based on deep learning techniques |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9246508/ https://www.ncbi.nlm.nih.gov/pubmed/35104909 http://dx.doi.org/10.1055/s-0042-1742388 |
work_keys_str_mv | AT jaberareej disambiguatingclinicalabbreviationsusingaonefitsallclassifierbasedondeeplearningtechniques AT martinezpaloma disambiguatingclinicalabbreviationsusingaonefitsallclassifierbasedondeeplearningtechniques |