No means ‘No’: a non-improper modeling approach, with embedded speculative context

MOTIVATION: Medical data are complex in nature, as the terms that appear in records often occur in different contexts. In this article, we investigate the embeddings of several biomedical language models (BioBERT, BioELECTRA and PubMedBERT) for their understanding of negation and speculation context, and we find that these models are unable to differentiate negated from non-negated context. To measure each model's understanding, we use cosine similarity scores between the embeddings of negated and non-negated sentence pairs. To improve the models, we introduce a generic super-tuning approach that enhances the embeddings' handling of negation and speculation context by utilizing a synthesized dataset.

RESULTS: After super-tuning, the models' embeddings capture negative and speculative contexts much better. We then fine-tuned the super-tuned models on various downstream tasks and found that they outperform previous models, achieving state-of-the-art results on negation and speculation cue and scope detection on the BioScope abstracts and the Sherlock dataset. We also confirmed that super-tuning incurs only a minimal performance trade-off on other tasks, such as natural language inference.

AVAILABILITY AND IMPLEMENTATION: The source code, data and models are available at: https://github.com/comprehend/engg-ai-research/tree/uncertainty-super-tuning.
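As an illustration of the cosine-similarity probe described above, the sketch below embeds a negated/non-negated sentence pair and compares the two vectors. This is a minimal example under stated assumptions, not the authors' exact pipeline: mean pooling over the last hidden states, the invented sentence pair, and the public dmis-lab/biobert-base-cased-v1.1 checkpoint are all choices made here for illustration.

    # Minimal sketch of the probe: embed a negated and a non-negated
    # sentence and compare them with cosine similarity.
    # Assumptions (illustrative, not from the paper): mean pooling over
    # the last hidden states, an invented sentence pair, and the public
    # dmis-lab/biobert-base-cased-v1.1 checkpoint.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
    model = AutoModel.from_pretrained("dmis-lab/biobert-base-cased-v1.1")

    def embed(sentence: str) -> torch.Tensor:
        # Mean-pool the token states into one sentence vector.
        inputs = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
        return hidden.mean(dim=1).squeeze(0)            # (768,)

    pos = embed("The patient shows signs of pneumonia.")
    neg = embed("The patient shows no signs of pneumonia.")

    # A score near 1.0 means the embeddings barely register the negation,
    # which is the failure mode the paper reports for the base models.
    score = torch.nn.functional.cosine_similarity(pos, neg, dim=0)
    print(f"cosine similarity: {score.item():.4f}")

A model whose embeddings distinguish negated from non-negated context should yield a noticeably lower score on such a pair; the paper's super-tuned models are evaluated on exactly this kind of separation.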

Bibliographic Details
Main Authors: Tiwary, Priya; Madhubalan, Akshayraj; Gautam, Amit
Format: Online Article Text
Language: English
Published: Oxford University Press, 2022-08-30
Journal: Bioinformatics
Collection: PubMed (record pubmed-9563701, MEDLINE/PubMed format)
Subjects: Original Papers
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9563701/
https://www.ncbi.nlm.nih.gov/pubmed/36040145
http://dx.doi.org/10.1093/bioinformatics/btac593
License: © The Author(s) 2022. Published by Oxford University Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.