Critiquing the Reasons for Making Artificial Moral Agents
Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents (AMAs). Reasons often given for developing AMAs are: the prevention of harm, the necessity of public trust, the prevention of immoral use, the claim that such machines are better moral reasoners than humans, and the prospect that building these machines would lead to a better understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons offered by machine ethicists to justify the development of AMAs. This closer examination is especially needed given the amount of funding currently allocated to the development of AMAs (from funders like Elon Musk) and the media attention that researchers and industry leaders receive for their efforts in this direction. The stakes in this debate are high because moral robots would make demands on society, including answers to a host of pending questions about what counts as an AMA and whether such agents are morally responsible for their behavior. This paper shifts the burden of proof back to the machine ethicists, demanding that they give good reasons to build AMAs. The paper argues that until this is done, the development of commercially available AMAs should not proceed further.
Main Authors: | van Wynsberghe, Aimee; Robbins, Scott
---|---
Format: | Online Article Text
Language: | English
Journal: | Sci Eng Ethics
Published: | Springer Netherlands, 2018-02-19
Subjects: | Original Paper
Collection: | PubMed (National Center for Biotechnology Information; record pubmed-6591188)
License: | © The Author(s) 2018. Open Access under the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/)
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6591188/ https://www.ncbi.nlm.nih.gov/pubmed/29460081 http://dx.doi.org/10.1007/s11948-018-0030-8