
ChatGPT’s inconsistent moral advice influences users’ judgment

ChatGPT is not only fun to chat with, but it also searches information, answers questions, and gives advice. With consistent moral advice, it can improve the moral judgment and decisions of users. Unfortunately, ChatGPT’s advice is not consistent. Nonetheless, it does influence users’ moral judgment, we find in an experiment, even if they know they are advised by a chatting bot, and they underestimate how much they are influenced. Thus, ChatGPT corrupts rather than improves its users’ moral judgment. While these findings call for better design of ChatGPT and similar bots, we also propose training to improve users’ digital literacy as a remedy. Transparency, however, is not sufficient to enable the responsible use of AI.

Bibliographic Details
Main Authors: Krügel, Sebastian, Ostermaier, Andreas, Uhl, Matthias
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10079665/
https://www.ncbi.nlm.nih.gov/pubmed/37024502
http://dx.doi.org/10.1038/s41598-023-31341-0
author Krügel, Sebastian
Ostermaier, Andreas
Uhl, Matthias
collection PubMed
description ChatGPT is not only fun to chat with, but it also searches information, answers questions, and gives advice. With consistent moral advice, it can improve the moral judgment and decisions of users. Unfortunately, ChatGPT’s advice is not consistent. Nonetheless, it does influence users’ moral judgment, we find in an experiment, even if they know they are advised by a chatting bot, and they underestimate how much they are influenced. Thus, ChatGPT corrupts rather than improves its users’ moral judgment. While these findings call for better design of ChatGPT and similar bots, we also propose training to improve users’ digital literacy as a remedy. Transparency, however, is not sufficient to enable the responsible use of AI.
format Online
Article
Text
id pubmed-10079665
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling Sci Rep Article. Nature Publishing Group UK, 2023-04-06. /pmc/articles/PMC10079665/ /pubmed/37024502 http://dx.doi.org/10.1038/s41598-023-31341-0 Text en. © The Author(s) 2023. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title ChatGPT’s inconsistent moral advice influences users’ judgment
topic Article