
Counterfactual learning in enhancing resilience in autonomous agent systems

Resilience in autonomous agent systems is the capacity to anticipate, respond to, adapt to, and recover from adverse and dynamic conditions in complex environments. It is associated with the intelligence the agents possess to preserve functionality, or to minimize the impact on functionality, through a transformation, reconfiguration, or expansion performed across the system. Enhancing the resilience of systems could pave the way toward higher autonomy, allowing them to tackle intricate dynamic problems. State-of-the-art systems have mostly focused on improving the redundancy of the system, adopting decentralized control architectures, and utilizing distributed sensing capabilities. While machine learning approaches for efficient distribution and allocation of skills and tasks have enhanced the potential of these systems, they are still limited when presented with dynamic environments. To move beyond these limitations, this paper advocates incorporating counterfactual learning models that enable agents to predict possible future conditions and adjust their behavior. Counterfactual learning has recently been gaining attention as a model-agnostic, post-hoc technique for improving explainability in machine learning models. Counterfactual causality can also help gain insight into unforeseen circumstances and support inferences about the probability of desired outcomes. We propose that it can be used in agent systems to guide and prepare them to cope with unanticipated environmental conditions. This supplementary support for adaptation can enable the design of more intelligent and complex autonomous agent systems that address the multifaceted characteristics of real-world problem domains.

Bibliographic Details
Main Author: Samarasinghe, Dilini
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10419171/
https://www.ncbi.nlm.nih.gov/pubmed/37575207
http://dx.doi.org/10.3389/frai.2023.1212336
author Samarasinghe, Dilini
collection PubMed
description Resilience in autonomous agent systems is the capacity to anticipate, respond to, adapt to, and recover from adverse and dynamic conditions in complex environments. It is associated with the intelligence the agents possess to preserve functionality, or to minimize the impact on functionality, through a transformation, reconfiguration, or expansion performed across the system. Enhancing the resilience of systems could pave the way toward higher autonomy, allowing them to tackle intricate dynamic problems. State-of-the-art systems have mostly focused on improving the redundancy of the system, adopting decentralized control architectures, and utilizing distributed sensing capabilities. While machine learning approaches for efficient distribution and allocation of skills and tasks have enhanced the potential of these systems, they are still limited when presented with dynamic environments. To move beyond these limitations, this paper advocates incorporating counterfactual learning models that enable agents to predict possible future conditions and adjust their behavior. Counterfactual learning has recently been gaining attention as a model-agnostic, post-hoc technique for improving explainability in machine learning models. Counterfactual causality can also help gain insight into unforeseen circumstances and support inferences about the probability of desired outcomes. We propose that it can be used in agent systems to guide and prepare them to cope with unanticipated environmental conditions. This supplementary support for adaptation can enable the design of more intelligent and complex autonomous agent systems that address the multifaceted characteristics of real-world problem domains.
format Online
Article
Text
id pubmed-10419171
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10419171 2023-08-12 Counterfactual learning in enhancing resilience in autonomous agent systems Samarasinghe, Dilini Front Artif Intell Artificial Intelligence Frontiers Media S.A. 2023-07-28 /pmc/articles/PMC10419171/ /pubmed/37575207 http://dx.doi.org/10.3389/frai.2023.1212336 Text en
Copyright © 2023 Samarasinghe. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Counterfactual learning in enhancing resilience in autonomous agent systems
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10419171/
https://www.ncbi.nlm.nih.gov/pubmed/37575207
http://dx.doi.org/10.3389/frai.2023.1212336