
Using Visual Patient to Show Vital Sign Predictions, a Computer-Based Mixed Quantitative and Qualitative Simulation Study

Background: Machine learning can analyze vast amounts of data and make predictions for events in the future. Our group created machine learning models for vital sign predictions. To transport the information of these predictions without numbers and numerical values and make them easily usable for human caregivers, we aimed to integrate them into the Philips Visual-Patient-avatar, an avatar-based visualization of patient monitoring.

Bibliographic Details
Main Authors: Malorgio, Amos, Henckert, David, Schweiger, Giovanna, Braun, Julia, Zacharowski, Kai, Raimann, Florian J., Piekarski, Florian, Meybohm, Patrick, Hottenrott, Sebastian, Froehlich, Corinna, Spahn, Donat R., Noethiger, Christoph B., Tscholl, David W., Roche, Tadzio R.
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606017/
https://www.ncbi.nlm.nih.gov/pubmed/37892102
http://dx.doi.org/10.3390/diagnostics13203281
author Malorgio, Amos
Henckert, David
Schweiger, Giovanna
Braun, Julia
Zacharowski, Kai
Raimann, Florian J.
Piekarski, Florian
Meybohm, Patrick
Hottenrott, Sebastian
Froehlich, Corinna
Spahn, Donat R.
Noethiger, Christoph B.
Tscholl, David W.
Roche, Tadzio R.
author_sort Malorgio, Amos
collection PubMed
description Background: Machine learning can analyze vast amounts of data and make predictions for events in the future. Our group created machine learning models for vital sign predictions. To transport the information of these predictions without numbers and numerical values and make them easily usable for human caregivers, we aimed to integrate them into the Philips Visual-Patient-avatar, an avatar-based visualization of patient monitoring.
Methods: We conducted a computer-based simulation study with 70 participants in 3 European university hospitals. We validated the vital sign prediction visualizations by testing their identification by anesthesiologists and intensivists. Each prediction visualization consisted of a condition (e.g., low blood pressure) and an urgency (a visual indication of the timespan in which the condition is expected to occur). To obtain qualitative user feedback, we also conducted standardized interviews and derived statements that participants later rated in an online survey.
Results: The mixed logistic regression model showed 77.9% (95% CI 73.2–82.0%) correct identification of prediction visualizations (i.e., condition and urgency both correctly identified) and 93.8% (95% CI 93.7–93.8%) for conditions only (i.e., without considering urgencies). A total of 49 out of 70 participants completed the online survey. The online survey participants agreed that the prediction visualizations were fun to use (32/49, 65.3%), and that they could imagine working with them in the future (30/49, 61.2%). They also agreed that identifying the urgencies was difficult (32/49, 65.3%).
Conclusions: This study found that care providers correctly identified >90% of the conditions (i.e., without considering urgencies). The accuracy of identification decreased when considering urgencies in addition to conditions. Therefore, in future development of the technology, we will focus on either only displaying conditions (without urgencies) or improving the visualizations of urgency to enhance usability for human users.
format Online
Article
Text
id pubmed-10606017
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10606017 2023-10-28
Using Visual Patient to Show Vital Sign Predictions, a Computer-Based Mixed Quantitative and Qualitative Simulation Study
Malorgio, Amos; Henckert, David; Schweiger, Giovanna; Braun, Julia; Zacharowski, Kai; Raimann, Florian J.; Piekarski, Florian; Meybohm, Patrick; Hottenrott, Sebastian; Froehlich, Corinna; Spahn, Donat R.; Noethiger, Christoph B.; Tscholl, David W.; Roche, Tadzio R.
Diagnostics (Basel) Article
Background: Machine learning can analyze vast amounts of data and make predictions for events in the future. Our group created machine learning models for vital sign predictions. To transport the information of these predictions without numbers and numerical values and make them easily usable for human caregivers, we aimed to integrate them into the Philips Visual-Patient-avatar, an avatar-based visualization of patient monitoring. Methods: We conducted a computer-based simulation study with 70 participants in 3 European university hospitals. We validated the vital sign prediction visualizations by testing their identification by anesthesiologists and intensivists. Each prediction visualization consisted of a condition (e.g., low blood pressure) and an urgency (a visual indication of the timespan in which the condition is expected to occur). To obtain qualitative user feedback, we also conducted standardized interviews and derived statements that participants later rated in an online survey. Results: The mixed logistic regression model showed 77.9% (95% CI 73.2–82.0%) correct identification of prediction visualizations (i.e., condition and urgency both correctly identified) and 93.8% (95% CI 93.7–93.8%) for conditions only (i.e., without considering urgencies). A total of 49 out of 70 participants completed the online survey. The online survey participants agreed that the prediction visualizations were fun to use (32/49, 65.3%), and that they could imagine working with them in the future (30/49, 61.2%). They also agreed that identifying the urgencies was difficult (32/49, 65.3%). Conclusions: This study found that care providers correctly identified >90% of the conditions (i.e., without considering urgencies). The accuracy of identification decreased when considering urgencies in addition to conditions. Therefore, in future development of the technology, we will focus on either only displaying conditions (without urgencies) or improving the visualizations of urgency to enhance usability for human users.
MDPI 2023-10-23 /pmc/articles/PMC10606017/ /pubmed/37892102 http://dx.doi.org/10.3390/diagnostics13203281
Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Using Visual Patient to Show Vital Sign Predictions, a Computer-Based Mixed Quantitative and Qualitative Simulation Study
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606017/
https://www.ncbi.nlm.nih.gov/pubmed/37892102
http://dx.doi.org/10.3390/diagnostics13203281