Risk of Injury in Moral Dilemmas With Autonomous Vehicles

As autonomous machines, such as automated vehicles (AVs) and robots, become pervasive in society, they will inevitably face moral dilemmas in which they must make decisions that risk injuring humans. However, prior research has framed these dilemmas in starkly simple terms, casting decisions as strictly life-and-death and neglecting the influence of risk of injury to the involved parties on the outcome. Here, we focus on this gap and present experimental work that systematically studies the effect of risk of injury on the decisions people make in these dilemmas. In four experiments, participants were asked to program their AVs to either save five pedestrians, which we refer to as the utilitarian choice, or save the driver, which we refer to as the nonutilitarian choice. The results indicate that most participants made the utilitarian choice but that this choice was moderated in important ways by perceived risk to the driver and risk to the pedestrians. As a second contribution, we demonstrate the value of formulating AV moral dilemmas in a game-theoretic framework that considers the possible influence of others' behavior. In the fourth experiment, we show that participants were more (less) likely to make the utilitarian choice the more utilitarian (nonutilitarian) other drivers behaved; furthermore, contrary to the game-theoretic prediction that decision-makers inevitably converge to nonutilitarianism, we found significant evidence of utilitarianism. We discuss theoretical implications for our understanding of human decision-making in moral dilemmas and practical guidelines for the design of autonomous machines that solve these dilemmas while, at the same time, being likely to be adopted in practice.
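
The game-theoretic prediction mentioned in the abstract, that self-interested drivers should converge to the nonutilitarian choice, can be made concrete with a toy model. The sketch below is purely illustrative: the payoff numbers, the replicator-style update, and the starting share of utilitarian drivers are hypothetical assumptions, not values or methods from the paper. It shows that when shielding the driver pays strictly more than saving the pedestrians regardless of what other drivers do, the utilitarian share of the population decays toward zero.

    # Toy model of the game-theoretic intuition described in the abstract.
    # All payoff values below are illustrative assumptions, not the paper's.

    def payoff(choice: str, frac_utilitarian: float) -> float:
        """Expected payoff for a driver programming choice 'U' (utilitarian:
        save the five pedestrians) or 'N' (nonutilitarian: save the driver),
        when a fraction frac_utilitarian of other drivers choose 'U'.
        Everyone is also sometimes a pedestrian, so others' utilitarian AVs
        benefit you on foot."""
        own_survival = 0.2 if choice == "U" else 1.0   # 'N' shields the driver
        benefit_as_pedestrian = 2.0 * frac_utilitarian
        return own_survival + benefit_as_pedestrian

    def replicator_step(x: float, rate: float = 0.5) -> float:
        """One discrete replicator-style update of the utilitarian share x:
        strategies earning above the population average grow, others shrink."""
        pu, pn = payoff("U", x), payoff("N", x)
        avg = x * pu + (1.0 - x) * pn
        return min(1.0, max(0.0, x + rate * x * (pu - avg)))

    x = 0.8  # assume 80% of drivers start out programming utilitarian AVs
    for _ in range(20):
        x = replicator_step(x)
    print(f"utilitarian share after 20 rounds: {x:.3f}")  # decays toward 0.0

Because 'N' strictly dominates 'U' under this parameterization, the share of utilitarian drivers falls from 0.8 to near zero within 20 rounds. That all-nonutilitarian equilibrium is precisely what the fourth experiment's participants notably did not converge to.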

Bibliographic Details
Main Authors: de Melo, Celso M., Marsella, Stacy, Gratch, Jonathan
Format: Online Article (Text)
Language: English
Journal: Frontiers in Robotics and AI (Front Robot AI)
Published: Frontiers Media S.A., 2021-01-20
Subjects: Robotics and AI
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8239464/
https://www.ncbi.nlm.nih.gov/pubmed/34212006
http://dx.doi.org/10.3389/frobt.2020.572529

Copyright © 2021 De Melo, Marsella and Gratch. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.