Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons
Main Authors: Benzinger, Lasse; Ursin, Frank; Balke, Wolf-Tilo; Kacprowski, Tim; Salloch, Sabine
Format: Online Article Text
Language: English
Published: BioMed Central, 2023
Subjects: Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10327319/ https://www.ncbi.nlm.nih.gov/pubmed/37415172 http://dx.doi.org/10.1186/s12910-023-00929-6
_version_ | 1785069599207718912
author | Benzinger, Lasse Ursin, Frank Balke, Wolf-Tilo Kacprowski, Tim Salloch, Sabine |
author_facet | Benzinger, Lasse Ursin, Frank Balke, Wolf-Tilo Kacprowski, Tim Salloch, Sabine |
author_sort | Benzinger, Lasse |
collection | PubMed |
description | BACKGROUND: Healthcare providers have to make ethically complex clinical decisions, which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use. METHODS: PubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting set of publications was screened by title and abstract according to defined inclusion and exclusion criteria, resulting in 44 papers whose full texts were analysed using the Kuckartz method of qualitative text analysis. RESULTS: Artificial Intelligence might increase patient autonomy by improving the accuracy of predictions and allowing patients to receive their preferred treatment. It is thought to increase beneficence by providing reliable information, thereby supporting surrogate decision-making. Some authors fear that reducing ethical decision-making to statistical correlations may limit autonomy. Others argue that AI may not be able to replicate the process of ethical deliberation because it lacks human characteristics. Concerns have been raised about issues of justice, as AI may replicate existing biases in the decision-making process. CONCLUSIONS: The prospective benefits of using AI in clinical ethical decision-making are manifold, but its development and use should be undertaken carefully to avoid ethical pitfalls. Several issues that are central to the discussion of Clinical Decision Support Systems, such as justice, explicability or human–machine interaction, have been neglected in the debate on AI for clinical ethics so far. TRIAL REGISTRATION: This review is registered at Open Science Framework (https://osf.io/wvcs9).
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12910-023-00929-6. |
format | Online Article Text |
id | pubmed-10327319 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-10327319 2023-07-08 Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons Benzinger, Lasse Ursin, Frank Balke, Wolf-Tilo Kacprowski, Tim Salloch, Sabine BMC Med Ethics Research BACKGROUND: Healthcare providers have to make ethically complex clinical decisions, which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use. METHODS: PubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting set of publications was screened by title and abstract according to defined inclusion and exclusion criteria, resulting in 44 papers whose full texts were analysed using the Kuckartz method of qualitative text analysis. RESULTS: Artificial Intelligence might increase patient autonomy by improving the accuracy of predictions and allowing patients to receive their preferred treatment. It is thought to increase beneficence by providing reliable information, thereby supporting surrogate decision-making. Some authors fear that reducing ethical decision-making to statistical correlations may limit autonomy. Others argue that AI may not be able to replicate the process of ethical deliberation because it lacks human characteristics. Concerns have been raised about issues of justice, as AI may replicate existing biases in the decision-making process. CONCLUSIONS: The prospective benefits of using AI in clinical ethical decision-making are manifold, but its development and use should be undertaken carefully to avoid ethical pitfalls.
Several issues that are central to the discussion of Clinical Decision Support Systems, such as justice, explicability or human–machine interaction, have been neglected in the debate on AI for clinical ethics so far. TRIAL REGISTRATION: This review is registered at Open Science Framework (https://osf.io/wvcs9). SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12910-023-00929-6. BioMed Central 2023-07-06 /pmc/articles/PMC10327319/ /pubmed/37415172 http://dx.doi.org/10.1186/s12910-023-00929-6 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Research Benzinger, Lasse Ursin, Frank Balke, Wolf-Tilo Kacprowski, Tim Salloch, Sabine Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons |
title | Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons |
title_full | Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons |
title_fullStr | Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons |
title_full_unstemmed | Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons |
title_short | Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons |
title_sort | should artificial intelligence be used to support clinical ethical decision-making? a systematic review of reasons |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10327319/ https://www.ncbi.nlm.nih.gov/pubmed/37415172 http://dx.doi.org/10.1186/s12910-023-00929-6 |
work_keys_str_mv | AT benzingerlasse shouldartificialintelligencebeusedtosupportclinicalethicaldecisionmakingasystematicreviewofreasons AT ursinfrank shouldartificialintelligencebeusedtosupportclinicalethicaldecisionmakingasystematicreviewofreasons AT balkewolftilo shouldartificialintelligencebeusedtosupportclinicalethicaldecisionmakingasystematicreviewofreasons AT kacprowskitim shouldartificialintelligencebeusedtosupportclinicalethicaldecisionmakingasystematicreviewofreasons AT sallochsabine shouldartificialintelligencebeusedtosupportclinicalethicaldecisionmakingasystematicreviewofreasons |
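The `work_keys_str_mv` values above appear to be normalized author-plus-title sort keys: each author name and the title lowercased, stripped of all non-alphanumeric characters, and concatenated after an `AT` prefix. A minimal sketch of how such a key could be derived — the `work_key` function and its normalization rule are assumptions for illustration, not the catalog's actual indexing code:

```python
import re

def work_key(author: str, title: str) -> str:
    """Build a normalized sort key: lowercase the author and title,
    drop every character that is not a letter or digit, then join
    them after the 'AT' prefix seen in the record."""
    def norm(s: str) -> str:
        return re.sub(r"[^a-z0-9]", "", s.lower())
    return f"AT {norm(author)} {norm(title)}"

title = ("Should Artificial Intelligence be used to support clinical "
         "ethical decision-making? A systematic review of reasons")
print(work_key("Benzinger, Lasse", title))
# → AT benzingerlasse shouldartificialintelligencebeusedtosupportclinicalethicaldecisionmakingasystematicreviewofreasons
```

Applied to each of the five authors in turn, this reproduces the five keys listed in the field.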