Immune moral models? Pro-social rule breaking as a moral enhancement approach for ethical AI
Main Authors:
Format: Online Article (Text)
Language: English
Published: Springer London, 2022
Subjects:
Online Access:
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9125349/
- https://www.ncbi.nlm.nih.gov/pubmed/35645466
- http://dx.doi.org/10.1007/s00146-022-01478-z

Summary: We are moving towards a future where Artificial Intelligence (AI) based agents make many decisions on behalf of humans. From healthcare decision-making to social media censoring, these agents face problems and make decisions with ethical and societal implications. Ethical behaviour is a critical characteristic that we would like in a human-centric AI. A common observation in human-centric industries, like the service industry and healthcare, is that their professionals tend to break rules, if necessary, for pro-social reasons. This behaviour among humans is defined as pro-social rule breaking. To make AI agents more human-centric, we argue that there is a need for a mechanism that helps AI agents identify when to break rules set by their designers. To understand when AI agents need to break rules, we examine the conditions under which humans break rules for pro-social reasons. In this paper, we present a study that introduces a ‘vaccination strategy dilemma’ to human participants and analyzes their responses. In this dilemma, one needs to decide whether they would distribute COVID-19 vaccines only to members of a high-risk group (follow the enforced rule) or, in selected cases, administer the vaccine to a few social influencers (break the rule), which might yield an overall greater benefit to society. The results of the empirical study suggest a relationship between stakeholder utilities and pro-social rule breaking (PSRB), which neither deontological nor utilitarian ethics completely explain. Finally, the paper discusses the design characteristics of an ethical agent capable of PSRB and future research directions on PSRB in the AI realm. We hope that this will inform the design of future AI agents and their decision-making behaviour.