Agree to Disagree: Subjective Fairness in Privacy-Restricted Decentralised Conflict Resolution


Bibliographic Details
Main Authors: Raymond, Alex; Malencia, Matthew; Paulino-Passos, Guilherme; Prorok, Amanda
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8891697/
https://www.ncbi.nlm.nih.gov/pubmed/35252361
http://dx.doi.org/10.3389/frobt.2022.733876
_version_ 1784661952535986176
author Raymond, Alex
Malencia, Matthew
Paulino-Passos, Guilherme
Prorok, Amanda
author_facet Raymond, Alex
Malencia, Matthew
Paulino-Passos, Guilherme
Prorok, Amanda
author_sort Raymond, Alex
collection PubMed
description Fairness is commonly seen as a property of the global outcome of a system and assumes centralisation and complete knowledge. However, in real decentralised applications, agents only have partial observation capabilities. Under limited information, agents rely on communication to divulge some of their private (and unobservable) information to others. When an agent deliberates to resolve conflicts, limited knowledge may cause its perspective of a correct outcome to differ from the actual outcome of the conflict resolution. This is subjective unfairness. As human systems and societies are organised by rules and norms, hybrid human-agent and multi-agent environments of the future will require agents to resolve conflicts in a decentralised and rule-aware way. Prior work achieves such decentralised, rule-aware conflict resolution through cultures: explainable architectures that embed human regulations and norms via argumentation frameworks with verification mechanisms. However, this prior work requires agents to have full state knowledge of each other, whereas many distributed applications in practice admit partial observation capabilities, which may require agents to communicate and carefully opt to release information if privacy constraints apply. To enable decentralised, fairness-aware conflict resolution under privacy constraints, we have two contributions: 1) a novel interaction approach and 2) a formalism of the relationship between privacy and fairness. Our proposed interaction approach is an architecture for privacy-aware explainable conflict resolution where agents engage in a dialogue of hypotheses and facts. To measure the privacy-fairness relationship, we define subjective and objective fairness on both the local and global scope and formalise the impact of partial observability due to privacy in these different notions of fairness. 
We first study our proposed architecture and the privacy-fairness relationship in the abstract, testing different argumentation strategies on a large number of randomised cultures. We empirically demonstrate the trade-off between privacy, objective fairness, and subjective fairness and show that better strategies can mitigate the effects of privacy in distributed systems. In addition to this analysis across a broad set of randomised abstract cultures, we analyse a case study for a specific scenario: we instantiate our architecture in a multi-agent simulation of prioritised rule-aware collision avoidance with limited information disclosure.
format Online
Article
Text
id pubmed-8891697
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-8891697 2022-03-04 Agree to Disagree: Subjective Fairness in Privacy-Restricted Decentralised Conflict Resolution Raymond, Alex; Malencia, Matthew; Paulino-Passos, Guilherme; Prorok, Amanda. Front Robot AI. Robotics and AI. Frontiers Media S.A. 2022-02-17 /pmc/articles/PMC8891697/ /pubmed/35252361 http://dx.doi.org/10.3389/frobt.2022.733876 Text en Copyright © 2022 Raymond, Malencia, Paulino-Passos and Prorok. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Robotics and AI
Raymond, Alex
Malencia, Matthew
Paulino-Passos, Guilherme
Prorok, Amanda
Agree to Disagree: Subjective Fairness in Privacy-Restricted Decentralised Conflict Resolution
title Agree to Disagree: Subjective Fairness in Privacy-Restricted Decentralised Conflict Resolution
title_full Agree to Disagree: Subjective Fairness in Privacy-Restricted Decentralised Conflict Resolution
title_fullStr Agree to Disagree: Subjective Fairness in Privacy-Restricted Decentralised Conflict Resolution
title_full_unstemmed Agree to Disagree: Subjective Fairness in Privacy-Restricted Decentralised Conflict Resolution
title_short Agree to Disagree: Subjective Fairness in Privacy-Restricted Decentralised Conflict Resolution
title_sort agree to disagree: subjective fairness in privacy-restricted decentralised conflict resolution
topic Robotics and AI
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8891697/
https://www.ncbi.nlm.nih.gov/pubmed/35252361
http://dx.doi.org/10.3389/frobt.2022.733876
work_keys_str_mv AT raymondalex agreetodisagreesubjectivefairnessinprivacyrestricteddecentralisedconflictresolution
AT malenciamatthew agreetodisagreesubjectivefairnessinprivacyrestricteddecentralisedconflictresolution
AT paulinopassosguilherme agreetodisagreesubjectivefairnessinprivacyrestricteddecentralisedconflictresolution
AT prorokamanda agreetodisagreesubjectivefairnessinprivacyrestricteddecentralisedconflictresolution