In AI We Trust: Ethics, Artificial Intelligence, and Reliability

One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission's High-Level Expert Group on AI (HLEG) has adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics Guidelines for Trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in human relationships, so proposing that AI should be trusted is a very serious claim. This paper will show that AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust, because it neither possesses emotive states nor can be held responsible for its actions, the respective requirements of the affective and normative accounts of trust. While AI meets all of the requirements of the rational account of trust, it will be shown that this is not actually a type of trust at all but rather a form of reliance. Ultimately, even complex machines such as AI should not be viewed as trustworthy, as doing so undermines the value of interpersonal trust, anthropomorphises AI, and diverts responsibility away from those who develop and use it.

Bibliographic Details
Main Author: Ryan, Mark
Format: Online Article (Text)
Language: English
Journal: Science and Engineering Ethics (Original Research/Scholarship)
Publisher: Springer Netherlands
Published online: 10 June 2020
License: © The Author(s) 2020. Open access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/)
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7550313/
https://www.ncbi.nlm.nih.gov/pubmed/32524425
http://dx.doi.org/10.1007/s11948-020-00228-y