
Instrumental Robots

Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own (in some sense). These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the question of responsibility. Instead, or so argues Nyholm, because supervised agency is a form of collaborative agency—of acting together—the right place to look is the theory of collaborative responsibility—responsibility in cases of acting together. This paper concedes that current AI will possess supervised agency, but argues that it is nevertheless wrong to think of the relevant human-AI interactions as a form of collaborative agency and, hence, that responsibility in cases of collaborative agency is not the right place to look for the responsibility-grounding relation in human-AI interactions. It also suggests that the right place to look for this responsibility-grounding relation in human-AI interactions is the use of certain sorts of agents as instruments.


Bibliographic Details
Main Author: Köhler, Sebastian
Format: Online Article Text
Language: English
Published: Springer Netherlands 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7755622/
https://www.ncbi.nlm.nih.gov/pubmed/32813121
http://dx.doi.org/10.1007/s11948-020-00259-5
_version_ 1783626387456786432
author Köhler, Sebastian
author_facet Köhler, Sebastian
author_sort Köhler, Sebastian
collection PubMed
description Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own (in some sense). These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the question of responsibility. Instead, or so argues Nyholm, because supervised agency is a form of collaborative agency—of acting together—the right place to look is the theory of collaborative responsibility—responsibility in cases of acting together. This paper concedes that current AI will possess supervised agency, but argues that it is nevertheless wrong to think of the relevant human-AI interactions as a form of collaborative agency and, hence, that responsibility in cases of collaborative agency is not the right place to look for the responsibility-grounding relation in human-AI interactions. It also suggests that the right place to look for this responsibility-grounding relation in human-AI interactions is the use of certain sorts of agents as instruments.
format Online
Article
Text
id pubmed-7755622
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Springer Netherlands
record_format MEDLINE/PubMed
spelling pubmed-7755622 2020-12-28 Instrumental Robots Köhler, Sebastian Sci Eng Ethics Original Research/Scholarship Advances in artificial intelligence research allow us to build fairly sophisticated agents: robots and computer programs capable of acting and deciding on their own (in some sense). These systems raise questions about who is responsible when something goes wrong—when such systems harm or kill humans. In a recent paper, Sven Nyholm has suggested that, because current AI will likely possess what we might call “supervised agency”, the theory of responsibility for individual agency is the wrong place to look for an answer to the question of responsibility. Instead, or so argues Nyholm, because supervised agency is a form of collaborative agency—of acting together—the right place to look is the theory of collaborative responsibility—responsibility in cases of acting together. This paper concedes that current AI will possess supervised agency, but argues that it is nevertheless wrong to think of the relevant human-AI interactions as a form of collaborative agency and, hence, that responsibility in cases of collaborative agency is not the right place to look for the responsibility-grounding relation in human-AI interactions. It also suggests that the right place to look for this responsibility-grounding relation in human-AI interactions is the use of certain sorts of agents as instruments. Springer Netherlands 2020-08-19 2020 /pmc/articles/PMC7755622/ /pubmed/32813121 http://dx.doi.org/10.1007/s11948-020-00259-5 Text en © The Author(s) 2020 Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Original Research/Scholarship
Köhler, Sebastian
Instrumental Robots
title Instrumental Robots
title_full Instrumental Robots
title_fullStr Instrumental Robots
title_full_unstemmed Instrumental Robots
title_short Instrumental Robots
title_sort instrumental robots
topic Original Research/Scholarship
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7755622/
https://www.ncbi.nlm.nih.gov/pubmed/32813121
http://dx.doi.org/10.1007/s11948-020-00259-5
work_keys_str_mv AT kohlersebastian instrumentalrobots