Intelligence brings responsibility - Even smart AI assistants are held responsible
Main authors:
Format: Online article (text)
Language: English
Published: Elsevier, 2023
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10440553/ https://www.ncbi.nlm.nih.gov/pubmed/37609629 http://dx.doi.org/10.1016/j.isci.2023.107494
Summary: People will not hold cars responsible for traffic accidents, yet they do when artificial intelligence (AI) is involved. AI systems are held responsible when they act or merely advise a human agent. Does this mean that as soon as AI is involved responsibility follows? To find out, we examined whether purely instrumental AI systems stay clear of responsibility. We compared AI-powered with non-AI-powered car warning systems and measured their responsibility rating alongside their human users. Our findings show that responsibility is shared when the warning system is powered by AI but not by a purely mechanical system, even though people consider both systems as mere tools. Surprisingly, whether the warning prevents the accident introduces an outcome bias: the AI takes higher credit than blame depending on what the human manages or fails to do.