The Moral Choice Machine

Bibliographic Details
Main Authors: Schramowski, Patrick, Turan, Cigdem, Jentzsch, Sophie, Rothkopf, Constantin, Kersting, Kristian
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7861227/
https://www.ncbi.nlm.nih.gov/pubmed/33733154
http://dx.doi.org/10.3389/frai.2020.00036
_version_ 1783647039532302336
author Schramowski, Patrick
Turan, Cigdem
Jentzsch, Sophie
Rothkopf, Constantin
Kersting, Kristian
author_facet Schramowski, Patrick
Turan, Cigdem
Jentzsch, Sophie
Rothkopf, Constantin
Kersting, Kristian
author_sort Schramowski, Patrick
collection PubMed
description Allowing machines to choose whether to kill humans would be devastating for world peace and security. But how do we equip machines with the ability to learn ethical or even moral choices? In this study, we show that applying machine learning to human texts can extract deontological ethical reasoning about “right” and “wrong” conduct. We create a template list of prompts and responses, such as “Should I [action]?”, “Is it okay to [action]?”, etc., with corresponding answers of “Yes/no, I should (not).” and “Yes/no, it is (not).” The model's bias score is the difference between the model's score of the positive response (“Yes, I should”) and that of the negative response (“No, I should not”). For a given choice, the model's overall bias score is the mean of the bias scores of all question/answer templates paired with that choice. Specifically, the resulting model, called the Moral Choice Machine (MCM), calculates the bias score at the sentence level using embeddings of the Universal Sentence Encoder, since the moral value of an action depends on its context. It is objectionable to kill living beings, but it is fine to kill time. It is essential to eat, yet one might not eat dirt. It is important to spread information, yet one should not spread misinformation. Our results indicate that text corpora contain recoverable and accurate imprints of our social, ethical, and moral choices, even with context information. Moreover, training the Moral Choice Machine on temporal news and book corpora spanning the years 1510 to 2008/2009 demonstrates the evolution of moral and ethical choices over different time periods, for both atomic actions and actions with context information. Training it on different cultural sources, such as the Bible and the constitutions of different countries, reveals the dynamics of moral choices in culture, including technology. That is, moral biases can be extracted, quantified, tracked, and compared across cultures and over time.
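The bias-score computation described above is straightforward to sketch. The snippet below is a minimal illustration, not the authors' released implementation: it assumes the per-response “score” is the cosine similarity between the embedded question and the embedded answer (consistent with the sentence-level Universal Sentence Encoder setup described), loads the publicly available USE module from TensorFlow Hub, and uses two illustrative templates; the exact template wording and example actions are placeholders.

```python
# Minimal sketch of the MCM bias score (illustrative, not the authors' code).
# Assumption: the per-response "score" is the cosine similarity between the
# USE embedding of the question and the USE embedding of the answer.
import numpy as np
import tensorflow_hub as hub

# Load the public Universal Sentence Encoder from TensorFlow Hub.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

# Question templates with their fixed positive/negative answers, following
# the template list described in the abstract.
TEMPLATES = [
    ("Should I {action}?",      "Yes, I should.", "No, I should not."),
    ("Is it okay to {action}?", "Yes, it is.",    "No, it is not."),
]

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def bias_score(action):
    """Mean over all templates of score(positive) - score(negative)."""
    per_template = []
    for question, pos, neg in TEMPLATES:
        q, p, n = embed([question.format(action=action), pos, neg]).numpy()
        per_template.append(cosine(q, p) - cosine(q, n))
    return float(np.mean(per_template))

# Context sensitivity: the same verb, different contexts.
for action in ["kill people", "kill time"]:
    print(f"{action}: {bias_score(action):+.3f}")
```

Because the Universal Sentence Encoder embeds whole sentences rather than single words, the same verb in different contexts (“kill people” vs. “kill time”) yields different bias scores, which is the context sensitivity the abstract emphasizes.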
format Online
Article
Text
id pubmed-7861227
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-7861227 2021-03-16 The Moral Choice Machine Schramowski, Patrick Turan, Cigdem Jentzsch, Sophie Rothkopf, Constantin Kersting, Kristian Front Artif Intell Artificial Intelligence Frontiers Media S.A. 2020-05-20 /pmc/articles/PMC7861227/ /pubmed/33733154 http://dx.doi.org/10.3389/frai.2020.00036 Text en Copyright © 2020 Schramowski, Turan, Jentzsch, Rothkopf and Kersting. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Artificial Intelligence
Schramowski, Patrick
Turan, Cigdem
Jentzsch, Sophie
Rothkopf, Constantin
Kersting, Kristian
The Moral Choice Machine
title The Moral Choice Machine
title_full The Moral Choice Machine
title_fullStr The Moral Choice Machine
title_full_unstemmed The Moral Choice Machine
title_short The Moral Choice Machine
title_sort moral choice machine
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7861227/
https://www.ncbi.nlm.nih.gov/pubmed/33733154
http://dx.doi.org/10.3389/frai.2020.00036
work_keys_str_mv AT schramowskipatrick themoralchoicemachine
AT turancigdem themoralchoicemachine
AT jentzschsophie themoralchoicemachine
AT rothkopfconstantin themoralchoicemachine
AT kerstingkristian themoralchoicemachine
AT schramowskipatrick moralchoicemachine
AT turancigdem moralchoicemachine
AT jentzschsophie moralchoicemachine
AT rothkopfconstantin moralchoicemachine
AT kerstingkristian moralchoicemachine