Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates?
Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it would only be so if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly...
Main Author: | Lara, Francisco |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer Netherlands, 2021 |
Subjects: | Original Research/Scholarship |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8241637/ https://www.ncbi.nlm.nih.gov/pubmed/34189623 http://dx.doi.org/10.1007/s11948-021-00318-5 |
_version_ | 1783715454715428864 |
---|---|
author | Lara, Francisco |
author_facet | Lara, Francisco |
author_sort | Lara, Francisco |
collection | PubMed |
description | Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it would only be so if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these proposals, this article proposes a virtual assistant that, through dialogue, neutrality and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, as long as certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology. |
format | Online Article Text |
id | pubmed-8241637 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Springer Netherlands |
record_format | MEDLINE/PubMed |
spelling | pubmed-8241637 2021-07-13 Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? Lara, Francisco Sci Eng Ethics Original Research/Scholarship Can Artificial Intelligence (AI) be more effective than human instruction for the moral enhancement of people? The author argues that it would only be so if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these proposals, this article proposes a virtual assistant that, through dialogue, neutrality and virtual reality technologies, can teach users to make better moral decisions on their own. The author concludes that, as long as certain precautions are taken in its design, such an assistant could do this better than a human instructor adopting the same educational methodology. Springer Netherlands 2021-06-29 2021 /pmc/articles/PMC8241637/ /pubmed/34189623 http://dx.doi.org/10.1007/s11948-021-00318-5 Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Original Research/Scholarship Lara, Francisco Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? |
title | Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? |
title_full | Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? |
title_fullStr | Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? |
title_full_unstemmed | Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? |
title_short | Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? |
title_sort | why a virtual assistant for moral enhancement when we could have a socrates? |
topic | Original Research/Scholarship |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8241637/ https://www.ncbi.nlm.nih.gov/pubmed/34189623 http://dx.doi.org/10.1007/s11948-021-00318-5 |
work_keys_str_mv | AT larafrancisco whyavirtualassistantformoralenhancementwhenwecouldhaveasocrates |