
Assuring the safety of AI-based clinical decision support systems: a case study of the AI Clinician for sepsis treatment

OBJECTIVES: Establishing confidence in the safety of Artificial Intelligence (AI)-based clinical decision support systems is important prior to clinical deployment and regulatory approval for systems with increasing autonomy. Here, we undertook safety assurance of the AI Clinician, a previously published reinforcement learning-based treatment recommendation system for sepsis.


Bibliographic Details
Main Authors: Festor, Paul, Jia, Yan, Gordon, Anthony C, Faisal, A Aldo, Habli, Ibrahim, Komorowski, Matthieu
Format: Online, Article, Text
Language: English
Published: BMJ Publishing Group, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9289024/
https://www.ncbi.nlm.nih.gov/pubmed/35851286
http://dx.doi.org/10.1136/bmjhci-2022-100549
author Festor, Paul
Jia, Yan
Gordon, Anthony C
Faisal, A Aldo
Habli, Ibrahim
Komorowski, Matthieu
author_sort Festor, Paul
collection PubMed
description OBJECTIVES: Establishing confidence in the safety of Artificial Intelligence (AI)-based clinical decision support systems is important prior to clinical deployment and regulatory approval for systems with increasing autonomy. Here, we undertook safety assurance of the AI Clinician, a previously published reinforcement learning-based treatment recommendation system for sepsis. METHODS: As part of the safety assurance, we defined four clinical hazards in sepsis resuscitation based on clinical expert opinion and the existing literature. We then identified a set of unsafe scenarios, intended to limit the action space of the AI agent with the goal of reducing the likelihood of hazardous decisions. RESULTS: Using a subset of the Medical Information Mart for Intensive Care (MIMIC-III) database, we demonstrated that our previously published ‘AI clinician’ recommended fewer hazardous decisions than human clinicians in three out of our four predefined clinical scenarios, while the difference was not statistically significant in the fourth scenario. Then, we modified the reward function to satisfy our safety constraints and trained a new AI Clinician agent. The retrained model shows enhanced safety, without negatively impacting model performance. DISCUSSION: While some contextual patient information absent from the data may have pushed human clinicians to take hazardous actions, the data were curated to limit the impact of this confounder. CONCLUSION: These advances provide a use case for the systematic safety assurance of AI-based clinical systems towards the generation of explicit safety evidence, which could be replicated for other AI applications or other clinical contexts, and inform medical device regulatory bodies.
format Online, Article, Text
id pubmed-9289024
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher BMJ Publishing Group
record_format MEDLINE/PubMed
spelling pubmed-9289024 2022-08-01 BMJ Health Care Inform, Original Research. BMJ Publishing Group 2022-07-14. /pmc/articles/PMC9289024/ /pubmed/35851286 http://dx.doi.org/10.1136/bmjhci-2022-100549 Text en
© Author(s) (or their employer(s)) 2022. Re-use permitted under CC BY. Published by BMJ. This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 International (CC BY 4.0) licence, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and an indication is given of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/.
title Assuring the safety of AI-based clinical decision support systems: a case study of the AI Clinician for sepsis treatment
topic Original Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9289024/
https://www.ncbi.nlm.nih.gov/pubmed/35851286
http://dx.doi.org/10.1136/bmjhci-2022-100549