
Evaluation of a Concept Mapping Task Using Named Entity Recognition and Normalization in Unstructured Clinical Text

In this pilot study, we explore the feasibility and accuracy of using a query in a commercial natural language processing engine for a named entity recognition and normalization task to extract a wide spectrum of clinical concepts from free-text clinical letters. Editorial guidance developed by two independent clinicians was used to annotate sixty anonymized clinic letters to create the gold standard. Concepts were categorized by semantic type, and labels were applied to indicate contextual attributes such as negation. The natural language processing (NLP) engine was Linguamatics I2E version 5.3.1, equipped with an algorithm for contextualizing words and phrases and an ontology of terms from Intelligent Medical Objects to which those tokens were mapped. Performance of the engine was assessed on a training set of the documents using precision, recall, and the F1 score, with subset analyses for semantic type, accurate negation, exact versus partial conceptual matching, and discontinuous text. The engine underwent tuning, and final performance was determined on a test set. The test set showed F1 scores of 0.81 and 0.84 under strict and relaxed criteria, respectively, when appropriate negation was not required, and 0.75 and 0.77 when it was. F1 scores were higher when concepts were derived from continuous text only. This pilot study showed that a commercially available NLP engine delivered good overall results for identifying a wide spectrum of structured clinical concepts. Such a system holds promise for extracting concepts from free text to populate problem lists or for data mining projects. Electronic supplementary material: The online version of this article (10.1007/s41666-020-00079-z) contains supplementary material, which is available to authorized users.
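For context, the precision, recall, and F1 metrics cited above follow their standard definitions; the formulation below is given only for reference and is not drawn from the article itself:

  Precision P = TP / (TP + FP)
  Recall    R = TP / (TP + FN)
  F1        = 2 · P · R / (P + R)

where TP counts gold-standard concept annotations the engine correctly identified, FP counts spurious concepts it produced, and FN counts gold-standard concepts it missed. The "strict" and "relaxed" criteria mentioned in the abstract plausibly correspond to exact and partial conceptual matches, respectively, in line with the subset analysis described.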


Bibliographic Details
Main Authors: Trivedi, Sapna; Gildersleeve, Roger; Franco, Sandra; Kanter, Andrew S.; Chaudhry, Afzal
Format: Online Article Text
Language: English
Published: J Healthc Inform Res, Springer International Publishing, 2020
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8982815/
https://www.ncbi.nlm.nih.gov/pubmed/35415451
http://dx.doi.org/10.1007/s41666-020-00079-z
Published online: 2020-10-16. © The Author(s) 2020. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).