How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners
Main author: Weber-Guskar, Eva
Format: Online Article Text
Language: English
Published: Springer Netherlands, 2021
Subjects: Original Paper
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8156584/ https://www.ncbi.nlm.nih.gov/pubmed/34075305 http://dx.doi.org/10.1007/s10676-021-09598-8
_version_ | 1783699479808966656 |
author | Weber-Guskar, Eva |
author_facet | Weber-Guskar, Eva |
author_sort | Weber-Guskar, Eva |
collection | PubMed |
description | Interactions between humans and machines that include artificial intelligence are increasingly common in nearly all areas of life. Meanwhile, AI products are increasingly endowed with emotional characteristics: they are designed and trained to elicit emotions in humans, to recognize human emotions and, sometimes, to simulate emotions (EAI). The introduction of such systems into our lives is met with some criticism. There is a rather strong intuition that there is something wrong about getting attached to a machine, about having certain emotions towards it, and about getting involved in a kind of affective relationship with it. In this paper, I want to tackle these worries by focusing on the last aspect: in what sense could it be problematic, or even wrong, to establish an emotional relationship with EAI systems? I want to show that the justifications for this widespread intuition are not as strong as they seem at first sight. To do so, I discuss three arguments: the argument from self-deception, the argument from lack of mutuality, and the argument from moral negligence.
format | Online Article Text |
id | pubmed-8156584 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Springer Netherlands |
record_format | MEDLINE/PubMed |
spelling | pubmed-8156584 2021-05-28 How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners Weber-Guskar, Eva Ethics Inf Technol Original Paper Springer Netherlands 2021-05-27 2021 /pmc/articles/PMC8156584/ /pubmed/34075305 http://dx.doi.org/10.1007/s10676-021-09598-8 Text en © The Author(s) 2021. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Original Paper Weber-Guskar, Eva How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners |
title | How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners |
title_full | How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners |
title_fullStr | How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners |
title_full_unstemmed | How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners |
title_short | How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners |
title_sort | how to feel about emotionalized artificial intelligence? when robot pets, holograms, and chatbots become affective partners |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8156584/ https://www.ncbi.nlm.nih.gov/pubmed/34075305 http://dx.doi.org/10.1007/s10676-021-09598-8 |
work_keys_str_mv | AT weberguskareva howtofeelaboutemotionalizedartificialintelligencewhenrobotpetshologramsandchatbotsbecomeaffectivepartners |