448. Can Electronic Clinical Notes Identify Travelers with Zika?
BACKGROUND: Travel history can help differentiate a public health emergency from a travel-related infection by providing information on exposure, but such information is often available only in unstructured clinical documents. We explored the feasibility of extracting these mentions from the electronic...
Main Authors: | Peterson, Kelly, Denhalter, Daniel, Patterson, Olga V, Jones, Makoto |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Oxford University Press 2018 |
Subjects: | Abstracts |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6255520/ http://dx.doi.org/10.1093/ofid/ofy210.457 |
_version_ | 1783373960199536640 |
---|---|
author | Peterson, Kelly Denhalter, Daniel Patterson, Olga V Jones, Makoto |
author_facet | Peterson, Kelly Denhalter, Daniel Patterson, Olga V Jones, Makoto |
author_sort | Peterson, Kelly |
collection | PubMed |
description | BACKGROUND: Travel history can help differentiate a public health emergency from a travel-related infection by providing information on exposure, but such information is often available only in unstructured clinical documents. We explored the feasibility of extracting these mentions from the electronic health record in an automated fashion. METHODS: As a collaboration with the National Biosurveillance Integration Center (NBIC), clinical notes were extracted from patient encounters with Zika, dengue, and chikungunya virus testing in the Department of Veterans Affairs (VA; a large healthcare system providing care in its facilities from Puerto Rico to the Philippines) between January 1, 2015 and February 28, 2016. From a corpus of 250,133 notes, we gathered a collection of 4,584 unique snippets by an automated bootstrapping process to identify documents containing potentially relevant information using phrases and travel locations. After establishing a guideline, snippets were manually annotated for travel affirmation and locations visited (see Figure 1). Using machine learning, including a neural language model, the snippets were used to train a Conditional Random Field (CRF) model to extract affirmed travel locations outside of the continental United States (see the illustrative sketch after this record). We did not extract the time of travel. RESULTS: Of the annotated snippets, 2,659 (58%) contained an affirmed mention of travel history, whereas 347 (7.6%) were negated. An inter-rater reliability (IRR) analysis showed 89% agreement with an associated kappa coefficient of 0.65. Analysis of the annotated snippets identified 551 unique location strings (see Figure 2). On a held-out test set of 459 snippets (10%), the machine learning model achieved 85.6% positive predictive value and 76.7% sensitivity. The algorithm now runs daily and is being evaluated for prospective use (see Figure 3). CONCLUSION: Targeted travel history extraction is feasible in a large medical system with acceptable accuracy. Our approach was able to extract novel places that would not necessarily be found in a curated list (e.g., Mexican Riviera). Further research could improve accuracy and incorporate this approach into models for earlier detection of autochthonous transmission. [Figures 1–3: see text] DISCLOSURES: All authors: No reported disclosures. |
format | Online Article Text |
id | pubmed-6255520 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Oxford University Press |
record_format | MEDLINE/PubMed |
spelling | pubmed-6255520 2018-11-28 448. Can Electronic Clinical Notes Identify Travelers with Zika? Peterson, Kelly Denhalter, Daniel Patterson, Olga V Jones, Makoto Open Forum Infect Dis Abstracts [Abstract omitted here; it duplicates the description field above.] Oxford University Press 2018-11-26 /pmc/articles/PMC6255520/ http://dx.doi.org/10.1093/ofid/ofy210.457 Text en © The Author(s) 2018. Published by Oxford University Press on behalf of Infectious Diseases Society of America. http://creativecommons.org/licenses/by-nc-nd/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial reproduction and distribution of the work, in any medium, provided the original work is not altered or transformed in any way, and that the work is properly cited. For commercial re-use, please contact journals.permissions@oup.com |
spellingShingle | Abstracts Peterson, Kelly Denhalter, Daniel Patterson, Olga V Jones, Makoto 448. Can Electronic Clinical Notes Identify Travelers with Zika? |
title | 448. Can Electronic Clinical Notes Identify Travelers with Zika? |
title_full | 448. Can Electronic Clinical Notes Identify Travelers with Zika? |
title_fullStr | 448. Can Electronic Clinical Notes Identify Travelers with Zika? |
title_full_unstemmed | 448. Can Electronic Clinical Notes Identify Travelers with Zika? |
title_short | 448. Can Electronic Clinical Notes Identify Travelers with Zika? |
title_sort | 448. can electronic clinical notes identify travelers with zika? |
topic | Abstracts |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6255520/ http://dx.doi.org/10.1093/ofid/ofy210.457 |
work_keys_str_mv | AT petersonkelly 448canelectronicclinicalnotesidentifytravelerswithzika AT denhalterdaniel 448canelectronicclinicalnotesidentifytravelerswithzika AT pattersonolgav 448canelectronicclinicalnotesidentifytravelerswithzika AT jonesmakoto 448canelectronicclinicalnotesidentifytravelerswithzika |
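The description field above outlines the study's pipeline: manually annotated snippets are used to train a Conditional Random Field (CRF) tagger that extracts affirmed travel locations outside the continental United States, which is then scored by positive predictive value and sensitivity. The abstract does not publish its code, so the sketch below is only a minimal illustration of that kind of pipeline; the library choice (sklearn-crfsuite), the token-level LOC/O labeling scheme, the feature set, and the toy snippets are all assumptions, and the study's neural language model component is not reproduced.

```python
# Minimal sketch only: the abstract does not release code, so the library
# (sklearn-crfsuite), feature set, LOC/O labeling scheme, and toy snippets
# below are assumptions made for illustration.
import sklearn_crfsuite

# Hypothetical annotated snippets: each token is labeled LOC for an affirmed
# travel location outside the continental United States, otherwise O.
train_snippets = [
    [("traveled", "O"), ("to", "O"), ("the", "O"),
     ("Dominican", "LOC"), ("Republic", "LOC"), ("last", "O"), ("month", "O")],
    [("denies", "O"), ("any", "O"), ("travel", "O"), ("outside", "O"),
     ("the", "O"), ("United", "O"), ("States", "O")],
]

def token_features(words, i):
    """Simple lexical/context features; the study additionally used a neural
    language model, which is omitted here."""
    return {
        "lower": words[i].lower(),
        "is_title": words[i].istitle(),
        "prev": words[i - 1].lower() if i > 0 else "<BOS>",
        "next": words[i + 1].lower() if i + 1 < len(words) else "<EOS>",
    }

def to_features_and_labels(snippets):
    X = [[token_features([w for w, _ in s], i) for i in range(len(s))] for s in snippets]
    y = [[label for _, label in s] for s in snippets]
    return X, y

X_train, y_train = to_features_and_labels(train_snippets)

# Linear-chain CRF, the model family named in the abstract.
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)

# Score with the metrics quoted in the abstract: positive predictive value
# (precision) and sensitivity (recall) for the LOC label. A real evaluation
# would use a held-out test split (about 10% in the study), not the training snippets.
def ppv_and_sensitivity(y_true, y_pred, positive="LOC"):
    tp = fp = fn = 0
    for true_seq, pred_seq in zip(y_true, y_pred):
        for t, p in zip(true_seq, pred_seq):
            if p == positive and t == positive:
                tp += 1
            elif p == positive:
                fp += 1
            elif t == positive:
                fn += 1
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    return ppv, sensitivity

y_pred = crf.predict(X_train)
print(ppv_and_sensitivity(y_train, y_pred))
```

In the actual study, the CRF was trained on 4,584 annotated snippets and evaluated on a held-out set of 459, reaching 85.6% positive predictive value and 76.7% sensitivity.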