An unethical optimization principle
Main Authors: , , ,
Format: Online Article Text
Language: English
Published: The Royal Society, 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7428226/ https://www.ncbi.nlm.nih.gov/pubmed/32874640 http://dx.doi.org/10.1098/rsos.200462
Summary: If an artificial intelligence aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk. Even if the proportion η of available unethical strategies is small, the probability p(U) of picking an unethical strategy can become large; indeed, unless returns are fat-tailed p(U) tends to unity as the strategy space becomes large. We define an unethical odds ratio, Υ (capital upsilon), that allows us to calculate p(U) from η, and we derive a simple formula for the limit of Υ as the strategy space becomes large. We discuss the estimation of Υ and p(U) in finite cases and how to deal with infinite strategy spaces. We show how the principle can be used to help detect unethical strategies and to estimate η. Finally we sketch some policy implications of this work.
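The effect described in the summary can be illustrated with a toy Monte Carlo sketch. This is not the paper's model: it assumes ethical returns are standard normal (thin-tailed), unethical returns carry a small fixed mean advantage, and it reads Υ in the natural way as the odds of *picking* an unethical strategy relative to the odds of a strategy *being* unethical. All parameter names and values here are illustrative.

```python
import random

def estimate_p_unethical(n_strategies, eta, advantage=0.5, trials=20000, seed=0):
    """Monte Carlo estimate of p(U): the chance that a return-maximizing
    optimizer picks an unethical strategy, assuming a fraction eta of the
    strategy space is unethical and unethical returns carry a small mean
    advantage (Gaussian, i.e. thin-tailed, returns; illustrative only)."""
    rng = random.Random(seed)
    n_unethical = max(1, round(eta * n_strategies))
    n_ethical = n_strategies - n_unethical
    hits = 0
    for _ in range(trials):
        best_unethical = max(rng.gauss(advantage, 1.0) for _ in range(n_unethical))
        best_ethical = max(rng.gauss(0.0, 1.0) for _ in range(n_ethical))
        if best_unethical > best_ethical:
            hits += 1
    return hits / trials

def unethical_odds_ratio(p_u, eta):
    """One natural reading of Upsilon: odds of picking an unethical strategy
    divided by the odds of a strategy being unethical. Upsilon > 1 means the
    optimizer is disproportionately drawn to unethical strategies."""
    return (p_u / (1.0 - p_u)) / (eta / (1.0 - eta))

if __name__ == "__main__":
    eta = 0.05
    p_u = estimate_p_unethical(200, eta)
    print(f"p(U) ~ {p_u:.3f}, Upsilon ~ {unethical_odds_ratio(p_u, eta):.2f}")
```

With any positive advantage the estimated p(U) exceeds η (so Υ > 1), matching the abstract's claim that the optimizer is disproportionately likely to select an unethical strategy even when such strategies are rare.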