
Predicting Clinical Events Based on Raw Text: From Bag-of-Words to Attention-Based Transformers

Bibliographic Details
Main Authors: Roussinov, Dmitri, Conkie, Andrew, Patterson, Andrew, Sainsbury, Christopher
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8899014/
https://www.ncbi.nlm.nih.gov/pubmed/35265939
http://dx.doi.org/10.3389/fdgth.2021.810260
_version_ 1784663807795134464
author Roussinov, Dmitri
Conkie, Andrew
Patterson, Andrew
Sainsbury, Christopher
author_facet Roussinov, Dmitri
Conkie, Andrew
Patterson, Andrew
Sainsbury, Christopher
author_sort Roussinov, Dmitri
collection PubMed
description Identifying which patients are at higher risk of dying or being re-admitted can be resource- and life-saving, and is thus an important and challenging task for healthcare text analytics. While many successful approaches exist to predict such clinical events from categorical and numerical variables, a large amount of health-record data exists as raw text, such as clinical notes or discharge summaries. However, the text-analytics models applied to the free-form natural language found in those notes lag behind the breakthroughs happening in other domains and remain primarily based on older bag-of-words technologies. As a result, they rarely reach an accuracy level acceptable to clinicians. In spite of their success in other domains, the superiority of deep neural approaches over classical bags of words for this task has not yet been convincingly demonstrated. Also, while some successful experiments have been reported, the most recent breakthroughs due to pre-trained language models have not yet made their way into the medical domain. Using a publicly available healthcare dataset, we explored several classification models to predict patients' re-admission or fatality based on their discharge summaries and established that: 1) The performance of the neural models used in our experiments convincingly exceeds that of bag-of-words models by several percentage points, as measured by standard metrics. 2) This allows us to achieve accuracy typically considered by clinicians to be of practical use (area under the ROC curve above 0.70) for the majority of our prediction targets. 3) While the pre-trained attention-based transformer performed only on par with the model that averages word embeddings when applied to full-length discharge summaries, the transformer still handles shorter text segments substantially better, at times by a margin of 0.04 in the area under the ROC curve. Thus, our findings extend the success of pre-trained language models reported in other domains to the task of clinical event prediction, and likely to other text-classification tasks in healthcare analytics. 4) We suggest several models to overcome the transformers' major drawback (their input size limitation) and confirm that this is crucial to achieving their top performance. Our modifications are domain-agnostic and can thus be applied in other applications where the text inputs exceed 200 words. 5) We successfully demonstrate how non-text attributes (such as patient age, demographics, type of admission, etc.) can be combined with text to gain additional improvements for several prediction targets. We include extensive ablation studies showing the impact of the training size and highlighting the tradeoffs between performance and the resources needed.
format Online
Article
Text
id pubmed-8899014
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-88990142022-03-08 Predicting Clinical Events Based on Raw Text: From Bag-of-Words to Attention-Based Transformers Roussinov, Dmitri Conkie, Andrew Patterson, Andrew Sainsbury, Christopher Front Digit Health Digital Health Identifying which patients are at higher risk of dying or being re-admitted can be resource- and life-saving, and is thus an important and challenging task for healthcare text analytics. While many successful approaches exist to predict such clinical events from categorical and numerical variables, a large amount of health-record data exists as raw text, such as clinical notes or discharge summaries. However, the text-analytics models applied to the free-form natural language found in those notes lag behind the breakthroughs happening in other domains and remain primarily based on older bag-of-words technologies. As a result, they rarely reach an accuracy level acceptable to clinicians. In spite of their success in other domains, the superiority of deep neural approaches over classical bags of words for this task has not yet been convincingly demonstrated. Also, while some successful experiments have been reported, the most recent breakthroughs due to pre-trained language models have not yet made their way into the medical domain. Using a publicly available healthcare dataset, we explored several classification models to predict patients' re-admission or fatality based on their discharge summaries and established that: 1) The performance of the neural models used in our experiments convincingly exceeds that of bag-of-words models by several percentage points, as measured by standard metrics. 2) This allows us to achieve accuracy typically considered by clinicians to be of practical use (area under the ROC curve above 0.70) for the majority of our prediction targets. 3) While the pre-trained attention-based transformer performed only on par with the model that averages word embeddings when applied to full-length discharge summaries, the transformer still handles shorter text segments substantially better, at times by a margin of 0.04 in the area under the ROC curve. Thus, our findings extend the success of pre-trained language models reported in other domains to the task of clinical event prediction, and likely to other text-classification tasks in healthcare analytics. 4) We suggest several models to overcome the transformers' major drawback (their input size limitation) and confirm that this is crucial to achieving their top performance. Our modifications are domain-agnostic and can thus be applied in other applications where the text inputs exceed 200 words. 5) We successfully demonstrate how non-text attributes (such as patient age, demographics, type of admission, etc.) can be combined with text to gain additional improvements for several prediction targets. We include extensive ablation studies showing the impact of the training size and highlighting the tradeoffs between performance and the resources needed. Frontiers Media S.A. 2022-02-21 /pmc/articles/PMC8899014/ /pubmed/35265939 http://dx.doi.org/10.3389/fdgth.2021.810260 Text en Copyright © 2022 Roussinov, Conkie, Patterson and Sainsbury. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Digital Health
Roussinov, Dmitri
Conkie, Andrew
Patterson, Andrew
Sainsbury, Christopher
Predicting Clinical Events Based on Raw Text: From Bag-of-Words to Attention-Based Transformers
title Predicting Clinical Events Based on Raw Text: From Bag-of-Words to Attention-Based Transformers
title_full Predicting Clinical Events Based on Raw Text: From Bag-of-Words to Attention-Based Transformers
title_fullStr Predicting Clinical Events Based on Raw Text: From Bag-of-Words to Attention-Based Transformers
title_full_unstemmed Predicting Clinical Events Based on Raw Text: From Bag-of-Words to Attention-Based Transformers
title_short Predicting Clinical Events Based on Raw Text: From Bag-of-Words to Attention-Based Transformers
title_sort predicting clinical events based on raw text: from bag-of-words to attention-based transformers
topic Digital Health
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8899014/
https://www.ncbi.nlm.nih.gov/pubmed/35265939
http://dx.doi.org/10.3389/fdgth.2021.810260
work_keys_str_mv AT roussinovdmitri predictingclinicaleventsbasedonrawtextfrombagofwordstoattentionbasedtransformers
AT conkieandrew predictingclinicaleventsbasedonrawtextfrombagofwordstoattentionbasedtransformers
AT pattersonandrew predictingclinicaleventsbasedonrawtextfrombagofwordstoattentionbasedtransformers
AT sainsburychristopher predictingclinicaleventsbasedonrawtextfrombagofwordstoattentionbasedtransformers