Human-centred mechanism design with Democratic AI
Building artificial intelligence (AI) that aligns with human values is an unsolved problem. Here we developed a human-in-the-loop research pipeline called Democratic AI, in which reinforcement learning is used to design a social mechanism that humans prefer by majority. A large group of humans played an online investment game that involved deciding whether to keep a monetary endowment or to share it with others for collective benefit. Shared revenue was returned to players under two different redistribution mechanisms, one designed by the AI and the other by humans. The AI discovered a mechanism that redressed initial wealth imbalance, sanctioned free riders and successfully won the majority vote. By optimizing for human preferences, Democratic AI offers a proof of concept for value-aligned policy innovation.
Main authors: | Koster, Raphael; Balaguer, Jan; Tacchetti, Andrea; Weinstein, Ari; Zhu, Tina; Hauser, Oliver; Williams, Duncan; Campbell-Gillingham, Lucy; Thacker, Phoebe; Botvinick, Matthew; Summerfield, Christopher |
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2022 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9584820/ https://www.ncbi.nlm.nih.gov/pubmed/35789321 http://dx.doi.org/10.1038/s41562-022-01383-x |
_version_ | 1784813358006927360 |
author | Koster, Raphael Balaguer, Jan Tacchetti, Andrea Weinstein, Ari Zhu, Tina Hauser, Oliver Williams, Duncan Campbell-Gillingham, Lucy Thacker, Phoebe Botvinick, Matthew Summerfield, Christopher |
author_facet | Koster, Raphael Balaguer, Jan Tacchetti, Andrea Weinstein, Ari Zhu, Tina Hauser, Oliver Williams, Duncan Campbell-Gillingham, Lucy Thacker, Phoebe Botvinick, Matthew Summerfield, Christopher |
author_sort | Koster, Raphael |
collection | PubMed |
description | Building artificial intelligence (AI) that aligns with human values is an unsolved problem. Here we developed a human-in-the-loop research pipeline called Democratic AI, in which reinforcement learning is used to design a social mechanism that humans prefer by majority. A large group of humans played an online investment game that involved deciding whether to keep a monetary endowment or to share it with others for collective benefit. Shared revenue was returned to players under two different redistribution mechanisms, one designed by the AI and the other by humans. The AI discovered a mechanism that redressed initial wealth imbalance, sanctioned free riders and successfully won the majority vote. By optimizing for human preferences, Democratic AI offers a proof of concept for value-aligned policy innovation. |
format | Online Article Text |
id | pubmed-9584820 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-95848202022-10-22 Human-centred mechanism design with Democratic AI Koster, Raphael Balaguer, Jan Tacchetti, Andrea Weinstein, Ari Zhu, Tina Hauser, Oliver Williams, Duncan Campbell-Gillingham, Lucy Thacker, Phoebe Botvinick, Matthew Summerfield, Christopher Nat Hum Behav Article Building artificial intelligence (AI) that aligns with human values is an unsolved problem. Here we developed a human-in-the-loop research pipeline called Democratic AI, in which reinforcement learning is used to design a social mechanism that humans prefer by majority. A large group of humans played an online investment game that involved deciding whether to keep a monetary endowment or to share it with others for collective benefit. Shared revenue was returned to players under two different redistribution mechanisms, one designed by the AI and the other by humans. The AI discovered a mechanism that redressed initial wealth imbalance, sanctioned free riders and successfully won the majority vote. By optimizing for human preferences, Democratic AI offers a proof of concept for value-aligned policy innovation. Nature Publishing Group UK 2022-07-04 2022 /pmc/articles/PMC9584820/ /pubmed/35789321 http://dx.doi.org/10.1038/s41562-022-01383-x Text en © The Author(s) 2022, corrected publication 2022 https://creativecommons.org/licenses/by/4.0/Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. 
If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Article Koster, Raphael Balaguer, Jan Tacchetti, Andrea Weinstein, Ari Zhu, Tina Hauser, Oliver Williams, Duncan Campbell-Gillingham, Lucy Thacker, Phoebe Botvinick, Matthew Summerfield, Christopher Human-centred mechanism design with Democratic AI |
title | Human-centred mechanism design with Democratic AI |
title_full | Human-centred mechanism design with Democratic AI |
title_fullStr | Human-centred mechanism design with Democratic AI |
title_full_unstemmed | Human-centred mechanism design with Democratic AI |
title_short | Human-centred mechanism design with Democratic AI |
title_sort | human-centred mechanism design with democratic ai |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9584820/ https://www.ncbi.nlm.nih.gov/pubmed/35789321 http://dx.doi.org/10.1038/s41562-022-01383-x |
work_keys_str_mv | AT kosterraphael humancentredmechanismdesignwithdemocraticai AT balaguerjan humancentredmechanismdesignwithdemocraticai AT tacchettiandrea humancentredmechanismdesignwithdemocraticai AT weinsteinari humancentredmechanismdesignwithdemocraticai AT zhutina humancentredmechanismdesignwithdemocraticai AT hauseroliver humancentredmechanismdesignwithdemocraticai AT williamsduncan humancentredmechanismdesignwithdemocraticai AT campbellgillinghamlucy humancentredmechanismdesignwithdemocraticai AT thackerphoebe humancentredmechanismdesignwithdemocraticai AT botvinickmatthew humancentredmechanismdesignwithdemocraticai AT summerfieldchristopher humancentredmechanismdesignwithdemocraticai |