
Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use

Bibliographic Details
Main Author: Herzog, Christian
Format: Online Article Text
Language: English
Published: Springer Netherlands 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7838071/
https://www.ncbi.nlm.nih.gov/pubmed/33496885
http://dx.doi.org/10.1007/s11948-021-00283-z
_version_ 1783643089885200384
author Herzog, Christian
author_facet Herzog, Christian
author_sort Herzog, Christian
collection PubMed
description In the present article, I will advocate caution against developing artificial moral agents (AMAs) based on the notion that the utilization of preliminary forms of AMAs will potentially negatively feed back on the human social system and on human moral thought itself and its value—e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization for practical and economical use. I will base my arguments on two thought experiments. The first thought experiment deals with the potential to generate a replica of an individual’s moral stances with the purpose of increasing what I term ‘moral efficiency’. Hence, as a first risk, an unregulated utilization of premature AMAs in a neoliberal capitalist system is likely to disadvantage those who cannot afford ‘moral replicas’ and further reinforce social inequalities. The second thought experiment deals with the idea of a ‘moral calculator’. As a second risk, I will argue that, even as a device equally accessible to all and aimed at augmenting human moral deliberation, ‘moral calculators’ as preliminary forms of AMAs are likely to diminish the breadth and depth of concepts employed in moral arguments. Again, I base this claim on the idea that the currently dominant economic system rewards increases in productivity. However, increases in efficiency will mostly stem from relying on the outputs of ‘moral calculators’ without further scrutiny. Premature AMAs will cover only a limited scope of moral argumentation and, hence, over-reliance on them will narrow human moral thought. In addition, as the third risk, I will argue that an increased disregard of the interior of a moral agent may ensue—a trend that can already be observed in the literature.
format Online Article Text
id pubmed-7838071
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Springer Netherlands
record_format MEDLINE/PubMed
spelling pubmed-7838071 2021-02-01 Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use Herzog, Christian Sci Eng Ethics Original Research/Scholarship Springer Netherlands 2021-01-26 2021 /pmc/articles/PMC7838071/ /pubmed/33496885 http://dx.doi.org/10.1007/s11948-021-00283-z Text en © The Author(s) 2021. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Original Research/Scholarship
Herzog, Christian
Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use
title Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use
title_full Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use
title_fullStr Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use
title_full_unstemmed Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use
title_short Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use
title_sort three risks that caution against a premature implementation of artificial moral agents for practical and economical use
topic Original Research/Scholarship
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7838071/
https://www.ncbi.nlm.nih.gov/pubmed/33496885
http://dx.doi.org/10.1007/s11948-021-00283-z
work_keys_str_mv AT herzogchristian threerisksthatcautionagainstaprematureimplementationofartificialmoralagentsforpracticalandeconomicaluse