In-Time Explainability in Multi-Agent Systems: Challenges, Opportunities, and Roadmap

Bibliographic Details
Main Authors: Alzetta, Francesco, Giorgini, Paolo, Najjar, Amro, Schumacher, Michael I., Calvaresi, Davide
Format: Online Article Text
Language: English
Published: 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7338180/
http://dx.doi.org/10.1007/978-3-030-51924-7_3
Collection: PubMed
Description: In the race for automation, distributed systems are required to perform increasingly complex reasoning to deal with dynamic tasks, often not controlled by humans. On the one hand, systems dealing with strict timing constraints in safety-critical applications have mainly focused on predictability, leaving little room for complex planning and decision-making processes. Indeed, real-time techniques are very efficient in predetermined, constrained, and controlled scenarios. Nevertheless, they lack the flexibility needed to operate in evolving settings, where the software must adapt to changes in the environment. On the other hand, Intelligent Systems (IS) have increasingly adopted Machine Learning (ML) techniques (e.g., subsymbolic predictors such as Neural Networks). The seminal applications of such IS started in zero-risk domains, producing revolutionary results. However, the ever-increasing exploitation of ML-based approaches has generated opaque systems, which are no longer socially acceptable, calling for eXplainable AI (XAI). The problem is exacerbated as IS approach safety-critical scenarios. This paper highlights the need for on-time explainability. In particular, it proposes to embrace the Real-Time Belief-Desire-Intention (RT-BDI) framework as an enabler of eXplainable Multi-Agent Systems (XMAS) in time-critical XAI.
ID: pubmed-7338180
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Published in: Explainable, Transparent Autonomous Agents and Multi-Agent Systems
Published online: 2020-06-04
Rights: © Springer Nature Switzerland AG 2020. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
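
Illustrative note: the abstract above proposes the Real-Time Belief-Desire-Intention (RT-BDI) model as an enabler of explainable multi-agent systems under timing constraints. As a rough sketch of that idea only (the paper does not define this API; all class, method, and field names below are hypothetical), the following Python snippet shows a deadline-aware BDI-style deliberation loop that records a human-readable trace of each commitment decision, so an explanation is available while the agent is acting rather than only after the fact.

# Minimal illustrative sketch (not from the paper): a deadline-aware BDI-style
# deliberation loop that logs the reason for every commitment decision.
# All names (Intention, Agent, deliberate, act, trace) are hypothetical.
import time
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Intention:
    goal: str
    plan: List[str]       # ordered primitive actions still to execute
    deadline: float       # absolute time (time.monotonic()) by which to finish


@dataclass
class Agent:
    desires: List[Tuple[str, List[str], float]] = field(default_factory=list)
    intentions: List[Intention] = field(default_factory=list)
    trace: List[str] = field(default_factory=list)  # in-time explanation log

    ACTION_COST = 0.1     # assumed worst-case cost per action, in seconds

    def deliberate(self) -> None:
        # Commit only to desires whose plans can still meet their deadline,
        # and record why each desire was adopted or dropped.
        now = time.monotonic()
        for goal, plan, deadline in self.desires:
            needed = self.ACTION_COST * len(plan)
            if now + needed <= deadline:
                self.intentions.append(Intention(goal, list(plan), deadline))
                self.trace.append(f"adopted '{goal}': {needed:.1f}s needed, deadline holds")
            else:
                self.trace.append(f"dropped '{goal}': {needed:.1f}s needed, deadline would be missed")
        self.desires.clear()

    def act(self) -> None:
        # Execute one action per committed intention, logging the rationale.
        for intention in list(self.intentions):
            if not intention.plan:
                self.intentions.remove(intention)
                continue
            action = intention.plan.pop(0)
            self.trace.append(f"executed '{action}' toward '{intention.goal}'")


if __name__ == "__main__":
    agent = Agent()
    now = time.monotonic()
    agent.desires.append(("reach_waypoint", ["plan_route", "move"], now + 5.0))
    agent.desires.append(("full_diagnostics", ["scan_sector"] * 100, now + 1.0))
    agent.deliberate()
    agent.act()
    print("\n".join(agent.trace))  # explanation is available as the agent acts

The fixed per-action cost and the simple schedulability test are placeholders; a real RT-BDI engine would rely on worst-case execution times and a real-time scheduler. The point of the sketch is only that each commitment decision leaves an explanation behind at the moment it is made.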