AI and Ethics When Human Beings Collaborate With AI Agents
Main Author: | Cañas, José J. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A. 2022 |
Subjects: | Psychology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8931455/ https://www.ncbi.nlm.nih.gov/pubmed/35310226 http://dx.doi.org/10.3389/fpsyg.2022.836650 |
_version_ | 1784671266992553984 |
---|---|
author | Cañas, José J. |
author_facet | Cañas, José J. |
author_sort | Cañas, José J. |
collection | PubMed |
description | The relationship between a human being and an AI system has to be considered as a collaborative process between two agents during the performance of an activity. When there is a collaboration between two people, a fundamental characteristic of that collaboration is that there is co-supervision, with each agent supervising the actions of the other. Such supervision ensures that the activity achieves its objectives, but it also means that responsibility for the consequences of the activity is shared. If there is no co-supervision, neither collaborator can be held co-responsible for the actions of the other. When the collaboration is between a person and an AI system, co-supervision is also necessary to ensure that the objectives of the activity are achieved, but this also means that there is co-responsibility for the consequences of the activities. Therefore, if each agent's responsibility for the consequences of the activity depends on the effectiveness and efficiency of the supervision that that agent performs over the other agent's actions, it will be necessary to take into account the way in which that supervision is carried out and the factors on which it depends. In the case of the human supervision of the actions of an AI system, there is a wealth of psychological research that can help us to establish cognitive and non-cognitive boundaries and their relationship to the responsibility of humans collaborating with AI systems. There is also psychological research on how an external observer supervises and evaluates human actions. This research can be used to programme AI systems in such a way that the boundaries of responsibility for AI systems can be established. In this article, we will describe some examples of how such research on the task of supervising the actions of another agent can be used to establish lines of shared responsibility between a human being and an AI system. The article will conclude by proposing that we should develop a methodology for assessing responsibility based on the results of the collaboration between a human being and an AI agent during the performance of one common activity. |
format | Online Article Text |
id | pubmed-8931455 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8931455 2022-03-19 AI and Ethics When Human Beings Collaborate With AI Agents Cañas, José J. Front Psychol Psychology The relationship between a human being and an AI system has to be considered as a collaborative process between two agents during the performance of an activity. When there is a collaboration between two people, a fundamental characteristic of that collaboration is that there is co-supervision, with each agent supervising the actions of the other. Such supervision ensures that the activity achieves its objectives, but it also means that responsibility for the consequences of the activity is shared. If there is no co-supervision, neither collaborator can be held co-responsible for the actions of the other. When the collaboration is between a person and an AI system, co-supervision is also necessary to ensure that the objectives of the activity are achieved, but this also means that there is co-responsibility for the consequences of the activities. Therefore, if each agent's responsibility for the consequences of the activity depends on the effectiveness and efficiency of the supervision that that agent performs over the other agent's actions, it will be necessary to take into account the way in which that supervision is carried out and the factors on which it depends. In the case of the human supervision of the actions of an AI system, there is a wealth of psychological research that can help us to establish cognitive and non-cognitive boundaries and their relationship to the responsibility of humans collaborating with AI systems. There is also psychological research on how an external observer supervises and evaluates human actions. This research can be used to programme AI systems in such a way that the boundaries of responsibility for AI systems can be established. In this article, we will describe some examples of how such research on the task of supervising the actions of another agent can be used to establish lines of shared responsibility between a human being and an AI system. The article will conclude by proposing that we should develop a methodology for assessing responsibility based on the results of the collaboration between a human being and an AI agent during the performance of one common activity. Frontiers Media S.A. 2022-03-04 /pmc/articles/PMC8931455/ /pubmed/35310226 http://dx.doi.org/10.3389/fpsyg.2022.836650 Text en Copyright © 2022 Cañas. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Cañas, José J. AI and Ethics When Human Beings Collaborate With AI Agents |
title | AI and Ethics When Human Beings Collaborate With AI Agents |
title_full | AI and Ethics When Human Beings Collaborate With AI Agents |
title_fullStr | AI and Ethics When Human Beings Collaborate With AI Agents |
title_full_unstemmed | AI and Ethics When Human Beings Collaborate With AI Agents |
title_short | AI and Ethics When Human Beings Collaborate With AI Agents |
title_sort | ai and ethics when human beings collaborate with ai agents |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8931455/ https://www.ncbi.nlm.nih.gov/pubmed/35310226 http://dx.doi.org/10.3389/fpsyg.2022.836650 |
work_keys_str_mv | AT canasjosej aiandethicswhenhumanbeingscollaboratewithaiagents |