Moral Judgments of Human vs. AI Agents in Moral Dilemmas
Artificial intelligence has rapidly integrated into human society, and its moral decision-making has gradually begun to seep into our lives. The significance of research on moral judgments of artificial intelligence behavior is thus becoming increasingly prominent. The present research aims to examine how...
| Main Authors: | Zhang, Yuyan; Wu, Jiahua; Yu, Feng; Xu, Liying |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2023 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9951994/ https://www.ncbi.nlm.nih.gov/pubmed/36829410 http://dx.doi.org/10.3390/bs13020181 |
_version_ | 1784893519161196544 |
---|---|
author | Zhang, Yuyan; Wu, Jiahua; Yu, Feng; Xu, Liying
author_facet | Zhang, Yuyan; Wu, Jiahua; Yu, Feng; Xu, Liying
author_sort | Zhang, Yuyan |
collection | PubMed |
description | Artificial intelligence has rapidly integrated into human society, and its moral decision-making has gradually begun to seep into our lives. The significance of research on moral judgments of artificial intelligence behavior is thus becoming increasingly prominent. The present research aims to examine how people make moral judgments about the behavior of artificial intelligence agents in a trolley dilemma, where people are usually driven by controlled cognitive processes, and in a footbridge dilemma, where people are usually driven by automatic emotional responses. Through three experiments (n = 626), we found that in the trolley dilemma (Experiment 1), the agent type rather than the actual action influenced people’s moral judgments. Specifically, participants rated AI agents’ behavior as more immoral and deserving of more blame than humans’ behavior. Conversely, in the footbridge dilemma (Experiment 2), the actual action rather than the agent type influenced people’s moral judgments. Specifically, participants rated action (a utilitarian act) as less moral and permissible and more morally wrong and blameworthy than inaction (a deontological act). A mixed-design experiment (Experiment 3) produced a pattern of results consistent with Experiments 1 and 2. This suggests that people apply different modes of moral judgment to artificial intelligence in different types of moral dilemmas, which may be explained by the fact that they engage different processing systems when making moral judgments in these different types of dilemmas. |
format | Online Article Text |
id | pubmed-9951994 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-99519942023-02-25 Moral Judgments of Human vs. AI Agents in Moral Dilemmas Zhang, Yuyan Wu, Jiahua Yu, Feng Xu, Liying Behav Sci (Basel) Article Artificial intelligence has rapidly integrated into human society, and its moral decision-making has gradually begun to seep into our lives. The significance of research on moral judgments of artificial intelligence behavior is thus becoming increasingly prominent. The present research aims to examine how people make moral judgments about the behavior of artificial intelligence agents in a trolley dilemma, where people are usually driven by controlled cognitive processes, and in a footbridge dilemma, where people are usually driven by automatic emotional responses. Through three experiments (n = 626), we found that in the trolley dilemma (Experiment 1), the agent type rather than the actual action influenced people’s moral judgments. Specifically, participants rated AI agents’ behavior as more immoral and deserving of more blame than humans’ behavior. Conversely, in the footbridge dilemma (Experiment 2), the actual action rather than the agent type influenced people’s moral judgments. Specifically, participants rated action (a utilitarian act) as less moral and permissible and more morally wrong and blameworthy than inaction (a deontological act). A mixed-design experiment (Experiment 3) produced a pattern of results consistent with Experiments 1 and 2. This suggests that people apply different modes of moral judgment to artificial intelligence in different types of moral dilemmas, which may be explained by the fact that they engage different processing systems when making moral judgments in these different types of dilemmas. MDPI 2023-02-16 /pmc/articles/PMC9951994/ /pubmed/36829410 http://dx.doi.org/10.3390/bs13020181 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Zhang, Yuyan Wu, Jiahua Yu, Feng Xu, Liying Moral Judgments of Human vs. AI Agents in Moral Dilemmas |
title | Moral Judgments of Human vs. AI Agents in Moral Dilemmas |
title_full | Moral Judgments of Human vs. AI Agents in Moral Dilemmas |
title_fullStr | Moral Judgments of Human vs. AI Agents in Moral Dilemmas |
title_full_unstemmed | Moral Judgments of Human vs. AI Agents in Moral Dilemmas |
title_short | Moral Judgments of Human vs. AI Agents in Moral Dilemmas |
title_sort | moral judgments of human vs. ai agents in moral dilemmas |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9951994/ https://www.ncbi.nlm.nih.gov/pubmed/36829410 http://dx.doi.org/10.3390/bs13020181 |
work_keys_str_mv | AT zhangyuyan moraljudgmentsofhumanvsaiagentsinmoraldilemmas AT wujiahua moraljudgmentsofhumanvsaiagentsinmoraldilemmas AT yufeng moraljudgmentsofhumanvsaiagentsinmoraldilemmas AT xuliying moraljudgmentsofhumanvsaiagentsinmoraldilemmas |