
Superhuman performance on sepsis MIMIC-III data by distributional reinforcement learning

We present a novel setup for treating sepsis using distributional reinforcement learning (RL). Sepsis is a life-threatening medical emergency, and its treatment is a challenging high-stakes decision-making problem that must explicitly account for risk. Treating sepsis with machine learning algorithms is difficult for several reasons: the initial data are limited and error-afflicted, the underlying biological system is highly complex, and the decisions made must be robust, transparent, and safe. We demonstrate a suitable method that combines data imputation by a kNN model using a custom distance with state representation by discretization using clustering, and that enables superhuman decision-making using speedy Q-learning in the framework of distributional RL. Compared to clinicians, the recovery rate on the test data set is increased by more than 3%. Our results illustrate how risk-aware RL agents can play a decisive role in critical situations such as the treatment of sepsis patients, a situation exacerbated by the COVID-19 pandemic (Martineau 2020). In addition, we emphasize the tractability of the methodology and the learning behavior while addressing some criticisms of the previous work (Komorowski et al. 2018) on this topic.
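The imputation step the abstract describes, a kNN model with a custom distance, can be sketched as follows. This is a generic illustration, not the paper's implementation: the masked Euclidean distance, `k=2`, and the toy vital-sign table are all assumptions.

```python
import math

def knn_impute(rows, k=2):
    """Fill missing values (None) with the mean over the k nearest
    fully observed rows, under a masked Euclidean distance that only
    compares features present in both rows (a stand-in for the
    paper's custom distance, which is not reproduced here)."""
    def dist(a, b):
        shared = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
        if not shared:
            return math.inf
        # Normalise by the number of shared features so rows with few
        # overlapping observations remain comparable.
        return math.sqrt(sum((x - y) ** 2 for x, y in shared) / len(shared))

    complete = [r for r in rows if None not in r]
    imputed = []
    for row in rows:
        if None not in row:
            imputed.append(list(row))
            continue
        neighbours = sorted(complete, key=lambda r: dist(row, r))[:k]
        imputed.append([
            x if x is not None else sum(n[c] for n in neighbours) / len(neighbours)
            for c, x in enumerate(row)
        ])
    return imputed

# Toy vital-sign table: [heart rate, mean arterial pressure]; None = missing.
data = [[80.0, 65.0], [82.0, None], [120.0, 40.0], [78.0, 70.0]]
print(knn_impute(data))
# -> [[80.0, 65.0], [82.0, 67.5], [120.0, 40.0], [78.0, 70.0]]
```

The masked distance matters for clinical records because different vitals are missing for different patients; comparing only jointly observed features avoids biasing the neighbourhood toward rows with many recorded values.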


Bibliographic Details
Main Authors: Böck, Markus; Malle, Julien; Pasterk, Daniel; Kukina, Hrvoje; Hasani, Ramin; Heitzinger, Clemens
Format: Online Article Text
Language: English
Published: Public Library of Science 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9632869/
https://www.ncbi.nlm.nih.gov/pubmed/36327195
http://dx.doi.org/10.1371/journal.pone.0275358
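The risk-aware decision-making the abstract attributes to distributional RL can be illustrated with a minimal tabular sketch: estimate a return distribution per (state, action) from offline episodes, then select actions by conditional value at risk (CVaR) rather than by the mean. The CVaR criterion, the `alpha` level, and the toy episodes below are assumptions for illustration; the paper's actual algorithm (speedy Q-learning in a distributional framework) is not reproduced here.

```python
from collections import defaultdict

def cvar(samples, alpha=0.25):
    """Conditional value at risk: mean of the worst alpha-fraction of
    returns. Higher is better; it penalises actions with bad left tails."""
    tail = sorted(samples)[: max(1, int(len(samples) * alpha))]
    return sum(tail) / len(tail)

def risk_aware_policy(trajectories, alpha=0.25):
    """Estimate an empirical return distribution per (state, action)
    from offline episodes, then choose per state the action with the
    best CVaR instead of the best mean - the distributional, risk-aware
    step that a plain expected-value Q-learner lacks."""
    returns = defaultdict(list)        # (state, action) -> sampled returns
    for episode in trajectories:       # episode: [(state, action, return), ...]
        for state, action, ret in episode:
            returns[(state, action)].append(ret)
    best = {}                          # state -> (action, score)
    for (state, action), samples in returns.items():
        score = cvar(samples, alpha)
        if state not in best or score > best[state][1]:
            best[state] = (action, score)
    return {s: a for s, (a, _) in best.items()}

# Toy example: in state 0, action 'a' has the higher mean return but a
# heavy bad tail; the CVaR criterion prefers the safer action 'b'.
episodes = [
    [(0, "a", 10.0)], [(0, "a", 10.0)], [(0, "a", -100.0)], [(0, "a", 10.0)],
    [(0, "b", 4.0)], [(0, "b", 5.0)], [(0, "b", 4.0)], [(0, "b", 5.0)],
]
print(risk_aware_policy(episodes))  # -> {0: 'b'}
```

In a high-stakes setting such as sepsis treatment, this tail-sensitive selection is precisely why keeping the whole return distribution, rather than only its mean, can change which treatment an agent recommends.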
_version_ 1784824132711481344
author Böck, Markus
Malle, Julien
Pasterk, Daniel
Kukina, Hrvoje
Hasani, Ramin
Heitzinger, Clemens
author_facet Böck, Markus
Malle, Julien
Pasterk, Daniel
Kukina, Hrvoje
Hasani, Ramin
Heitzinger, Clemens
author_sort Böck, Markus
collection PubMed
description We present a novel setup for treating sepsis using distributional reinforcement learning (RL). Sepsis is a life-threatening medical emergency, and its treatment is a challenging high-stakes decision-making problem that must explicitly account for risk. Treating sepsis with machine learning algorithms is difficult for several reasons: the initial data are limited and error-afflicted, the underlying biological system is highly complex, and the decisions made must be robust, transparent, and safe. We demonstrate a suitable method that combines data imputation by a kNN model using a custom distance with state representation by discretization using clustering, and that enables superhuman decision-making using speedy Q-learning in the framework of distributional RL. Compared to clinicians, the recovery rate on the test data set is increased by more than 3%. Our results illustrate how risk-aware RL agents can play a decisive role in critical situations such as the treatment of sepsis patients, a situation exacerbated by the COVID-19 pandemic (Martineau 2020). In addition, we emphasize the tractability of the methodology and the learning behavior while addressing some criticisms of the previous work (Komorowski et al. 2018) on this topic.
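The state representation mentioned in the description, discretization by clustering, can be sketched with plain k-means: continuous patient measurements are mapped to a cluster id, which then serves as a discrete state for a tabular Q-learner. The k-means routine, the feature vectors, and `k=2` below are illustrative assumptions, not the paper's actual clustering configuration.

```python
import random

def _sq(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means; each resulting cluster id acts as one discrete
    state for a tabular RL agent."""
    centroids = random.Random(seed).sample(points, k)
    for _ in range(iters):
        # Assign every point to its nearest centroid.
        labels = [min(range(k), key=lambda c: _sq(pt, centroids[c])) for pt in points]
        # Recompute each centroid as the mean of its members.
        for c in range(k):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:  # keep the old centroid if a cluster empties
                centroids[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return labels, centroids

def to_state(pt, centroids):
    """Map a new measurement vector to its discrete state id."""
    return min(range(len(centroids)), key=lambda c: _sq(pt, centroids[c]))

# Two well-separated toy patient groups in a 2-feature space.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels, centroids = kmeans(pts, k=2)
```

Once every measurement vector collapses to a cluster id via `to_state`, the continuous control problem becomes a finite MDP on which tabular methods such as speedy Q-learning are applicable.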
format Online
Article
Text
id pubmed-9632869
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-9632869 2022-11-04 Superhuman performance on sepsis MIMIC-III data by distributional reinforcement learning Böck, Markus Malle, Julien Pasterk, Daniel Kukina, Hrvoje Hasani, Ramin Heitzinger, Clemens PLoS One Research Article We present a novel setup for treating sepsis using distributional reinforcement learning (RL). Sepsis is a life-threatening medical emergency, and its treatment is a challenging high-stakes decision-making problem that must explicitly account for risk. Treating sepsis with machine learning algorithms is difficult for several reasons: the initial data are limited and error-afflicted, the underlying biological system is highly complex, and the decisions made must be robust, transparent, and safe. We demonstrate a suitable method that combines data imputation by a kNN model using a custom distance with state representation by discretization using clustering, and that enables superhuman decision-making using speedy Q-learning in the framework of distributional RL. Compared to clinicians, the recovery rate on the test data set is increased by more than 3%. Our results illustrate how risk-aware RL agents can play a decisive role in critical situations such as the treatment of sepsis patients, a situation exacerbated by the COVID-19 pandemic (Martineau 2020). In addition, we emphasize the tractability of the methodology and the learning behavior while addressing some criticisms of the previous work (Komorowski et al. 2018) on this topic. Public Library of Science 2022-11-03 /pmc/articles/PMC9632869/ /pubmed/36327195 http://dx.doi.org/10.1371/journal.pone.0275358 Text en © 2022 Böck et al https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Böck, Markus
Malle, Julien
Pasterk, Daniel
Kukina, Hrvoje
Hasani, Ramin
Heitzinger, Clemens
Superhuman performance on sepsis MIMIC-III data by distributional reinforcement learning
title Superhuman performance on sepsis MIMIC-III data by distributional reinforcement learning
title_full Superhuman performance on sepsis MIMIC-III data by distributional reinforcement learning
title_fullStr Superhuman performance on sepsis MIMIC-III data by distributional reinforcement learning
title_full_unstemmed Superhuman performance on sepsis MIMIC-III data by distributional reinforcement learning
title_short Superhuman performance on sepsis MIMIC-III data by distributional reinforcement learning
title_sort superhuman performance on sepsis mimic-iii data by distributional reinforcement learning
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9632869/
https://www.ncbi.nlm.nih.gov/pubmed/36327195
http://dx.doi.org/10.1371/journal.pone.0275358
work_keys_str_mv AT bockmarkus superhumanperformanceonsepsismimiciiidatabydistributionalreinforcementlearning
AT mallejulien superhumanperformanceonsepsismimiciiidatabydistributionalreinforcementlearning
AT pasterkdaniel superhumanperformanceonsepsismimiciiidatabydistributionalreinforcementlearning
AT kukinahrvoje superhumanperformanceonsepsismimiciiidatabydistributionalreinforcementlearning
AT hasaniramin superhumanperformanceonsepsismimiciiidatabydistributionalreinforcementlearning
AT heitzingerclemens superhumanperformanceonsepsismimiciiidatabydistributionalreinforcementlearning