
Trust does not need to be human: it is possible to trust medical AI

Bibliographic Details
Main Authors: Ferrario, Andrea; Loi, Michele; Viganò, Eleonora
Format: Online, Article, Text
Language: English
Published: BMJ Publishing Group, 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8165138/
https://www.ncbi.nlm.nih.gov/pubmed/33239471
http://dx.doi.org/10.1136/medethics-2020-106922
author Ferrario, Andrea
Loi, Michele
Viganò, Eleonora
collection PubMed
description In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence (AI), if one refrains from simply assuming that trust describes human–human interactions. To do so, we consider an account of trust that distinguishes trust from reliance in a way that is compatible with trusting non-human agents. In this account, to trust a medical AI is to rely on it with little monitoring and control of the elements that make it trustworthy. This attitude does not imply specific properties in the AI system that in fact only humans can have. This account of trust is applicable, in particular, to all cases where a physician relies on the medical AI predictions to support his or her decision making.
format Online
Article
Text
id pubmed-8165138
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher BMJ Publishing Group
record_format MEDLINE/PubMed
journal J Med Ethics
section Response
dates 2021-06 (issue); 2020-11-25 (first published online)
license © Author(s) (or their employer(s)) 2021. Re-use permitted under CC BY-NC; no commercial re-use. Published by BMJ. This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, and build upon this work non-commercially, and to license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial. See: https://creativecommons.org/licenses/by-nc/4.0/
topic Response