
Clinical concept recognition: Evaluation of existing systems on EHRs

OBJECTIVE: The adoption of electronic health records (EHRs) has produced enormous amounts of data, creating research opportunities in clinical data science. Several concept recognition systems have been developed to facilitate clinical information extraction from these data. While studies exist that compare the performance of many concept recognition systems, they are typically developed internally and may be biased by differing internal implementations, parameter choices, and the limited number of systems included in the evaluations. The goal of this research is to evaluate the performance of existing systems in retrieving relevant clinical concepts from EHRs.

METHODS: We investigated six concept recognition systems: CLAMP, cTAKES, MetaMap, NCBO Annotator, QuickUMLS, and ScispaCy. The clinical concepts extracted included procedures, disorders, medications, and anatomical locations. System performance was evaluated on two datasets: the 2010 i2b2 and MIMIC-III. Additionally, we assessed the performance of these systems in five challenging situations: negation, severity, abbreviation, ambiguity, and misspelling.

RESULTS: For clinical concept extraction, CLAMP achieved the best performance on exact and inexact matching, with F-scores of 0.70 and 0.94, respectively, on i2b2, and 0.39 and 0.50, respectively, on MIMIC-III. Across the five challenging situations, ScispaCy excelled at extracting abbreviation information (F-score: 0.86), followed by NCBO Annotator (F-score: 0.79). CLAMP performed best at extracting severity terms (F-score: 0.73), followed by NCBO Annotator (F-score: 0.68). CLAMP also outperformed the other systems at extracting negated concepts (F-score: 0.63).

CONCLUSIONS: Several concept recognition systems exist to extract clinical information from unstructured data. This study provides an external, end-user evaluation of six commonly used systems across different extraction tasks. Our findings suggest that CLAMP provides the most comprehensive set of annotations for clinical concept extraction tasks and their associated challenges. Comparing standard extraction tasks across systems provides guidance to clinical researchers when selecting a concept recognition system relevant to their information extraction task.
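
To make the extraction task concrete, here is a minimal sketch of clinical concept recognition with scispaCy, one of the six evaluated systems. It uses scispaCy's documented abbreviation detector and UMLS entity linker; the clinical note text is an invented example, not data from the study, and the linker downloads a large knowledge base on first use.

```python
# pip install spacy scispacy, plus the en_core_sci_sm model from the scispaCy releases.
import spacy
from scispacy.abbreviation import AbbreviationDetector  # noqa: F401 -- registers the pipe
from scispacy.linking import EntityLinker  # noqa: F401 -- registers the pipe

nlp = spacy.load("en_core_sci_sm")
nlp.add_pipe("abbreviation_detector")  # detects short forms such as "MI"
nlp.add_pipe(
    "scispacy_linker",
    config={"resolve_abbreviations": True, "linker_name": "umls"},
)

# Hypothetical note text for illustration only.
note = "Pt denies chest pain. Hx of myocardial infarction (MI), on aspirin."
doc = nlp(note)

linker = nlp.get_pipe("scispacy_linker")
for ent in doc.ents:
    # Each mention may link to several UMLS concepts, each with a score;
    # print the top candidate (CUI, score, canonical name).
    for cui, score in ent._.kb_ents[:1]:
        concept = linker.kb.cui_to_entity[cui]
        print(ent.text, cui, round(score, 2), concept.canonical_name)
```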

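The exact- vs. inexact-matching F-scores reported above can be read as follows: under exact matching, a predicted annotation counts as a true positive only if its span boundaries and type match a gold annotation precisely. A hedged sketch of that metric, with made-up spans rather than study data:

```python
def f_score(gold: set, predicted: set) -> float:
    """Exact-matching F1: a prediction counts only if its
    (start, end, type) triple equals a gold annotation."""
    tp = len(gold & predicted)  # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {(0, 10, "disorder"), (15, 22, "medication")}
predicted = {(0, 10, "disorder"), (15, 20, "medication")}  # boundary off by two chars
print(round(f_score(gold, predicted), 2))  # 0.5 -- the inexact boundary costs a match
```

An inexact (partial) matching variant would instead count overlapping spans of the same type as hits, which is why inexact F-scores are consistently higher.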

Bibliographic Details
Main Authors: Lossio-Ventura, Juan Antonio; Sun, Ran; Boussard, Sebastien; Hernandez-Boussard, Tina
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023-01-13
Journal: Front Artif Intell (Artificial Intelligence section)
Subjects: Artificial Intelligence
License: Copyright © 2023 Lossio-Ventura, Sun, Boussard and Hernandez-Boussard; open access under the Creative Commons Attribution License (CC BY)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9880223/
https://www.ncbi.nlm.nih.gov/pubmed/36714202
http://dx.doi.org/10.3389/frai.2022.1051724