
Models of Intervention: Helping Agents and Human Users Avoid Undesirable Outcomes

When working in an unfamiliar online environment, it can be helpful to have an observer that can intervene and guide a user toward a desirable outcome while avoiding undesirable outcomes or frustration. The Intervention Problem is deciding when to intervene in order to help a user. The Intervention...


Bibliographic Details
Main Authors: Weerawardhana, Sachini, Whitley, Darrell, Roberts, Mark
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8851243/
https://www.ncbi.nlm.nih.gov/pubmed/35187470
http://dx.doi.org/10.3389/frai.2021.723936
_version_ 1784652781164953600
author Weerawardhana, Sachini
Whitley, Darrell
Roberts, Mark
author_facet Weerawardhana, Sachini
Whitley, Darrell
Roberts, Mark
author_sort Weerawardhana, Sachini
collection PubMed
description When working in an unfamiliar online environment, it can be helpful to have an observer that can intervene and guide a user toward a desirable outcome while avoiding undesirable outcomes or frustration. The Intervention Problem is deciding when to intervene in order to help a user. The Intervention Problem is similar to, but distinct from, Plan Recognition because the observer must not only recognize the intended goals of a user but also when to intervene to help the user when necessary. We formalize a family of Intervention Problems and show how these problems can be solved using a combination of Plan Recognition methods and classification algorithms to decide whether to intervene. For our benchmarks, the classification algorithms dominate three recent Plan Recognition approaches. We then generalize these results to Human-Aware Intervention, where the observer must decide in real time whether to intervene while human users solve a cognitively engaging puzzle. Using a revised feature set more appropriate to human behavior, we produce a learned model to recognize when a human user is about to trigger an undesirable outcome. We perform a human-subject study to evaluate the Human-Aware Intervention. We find that the revised model also dominates existing Plan Recognition algorithms in predicting Human-Aware Intervention.
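Editorial aside (not part of the bibliographic record or the paper's implementation): the abstract frames intervention as a decision layered on top of Plan Recognition, i.e., a learned classifier that maps features of the observed action sequence to an intervene/wait decision. A minimal sketch of that framing, assuming a scikit-learn classifier and toy features (both are assumptions; the paper's actual feature set and models are not given in this record):

    # Hypothetical sketch only: a classifier decides "intervene" vs. "wait"
    # from features of the observed plan prefix. Feature choices and the
    # classifier are illustrative assumptions, not the authors' method.
    from sklearn.ensemble import RandomForestClassifier

    def extract_features(observed_actions, candidate_goals):
        # Toy features summarizing the observed actions relative to the
        # remaining goal hypotheses; the paper uses a richer feature set.
        return [
            len(observed_actions),   # actions observed so far
            len(candidate_goals),    # goal hypotheses still consistent
        ]

    # X: feature vectors for labeled plan prefixes; y: 1 = intervene, 0 = wait
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    # clf.fit(X, y)                                    # train on labeled traces
    # clf.predict([extract_features(actions, goals)])  # online decision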
format Online
Article
Text
id pubmed-8851243
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-8851243 2022-02-18 Models of Intervention: Helping Agents and Human Users Avoid Undesirable Outcomes Weerawardhana, Sachini Whitley, Darrell Roberts, Mark Front Artif Intell Artificial Intelligence When working in an unfamiliar online environment, it can be helpful to have an observer that can intervene and guide a user toward a desirable outcome while avoiding undesirable outcomes or frustration. The Intervention Problem is deciding when to intervene in order to help a user. The Intervention Problem is similar to, but distinct from, Plan Recognition because the observer must not only recognize the intended goals of a user but also when to intervene to help the user when necessary. We formalize a family of Intervention Problems and show how these problems can be solved using a combination of Plan Recognition methods and classification algorithms to decide whether to intervene. For our benchmarks, the classification algorithms dominate three recent Plan Recognition approaches. We then generalize these results to Human-Aware Intervention, where the observer must decide in real time whether to intervene while human users solve a cognitively engaging puzzle. Using a revised feature set more appropriate to human behavior, we produce a learned model to recognize when a human user is about to trigger an undesirable outcome. We perform a human-subject study to evaluate the Human-Aware Intervention. We find that the revised model also dominates existing Plan Recognition algorithms in predicting Human-Aware Intervention. Frontiers Media S.A. 2022-02-03 /pmc/articles/PMC8851243/ /pubmed/35187470 http://dx.doi.org/10.3389/frai.2021.723936 Text en Copyright © 2022 Weerawardhana, Whitley and Roberts. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Artificial Intelligence
Weerawardhana, Sachini
Whitley, Darrell
Roberts, Mark
Models of Intervention: Helping Agents and Human Users Avoid Undesirable Outcomes
title Models of Intervention: Helping Agents and Human Users Avoid Undesirable Outcomes
title_full Models of Intervention: Helping Agents and Human Users Avoid Undesirable Outcomes
title_fullStr Models of Intervention: Helping Agents and Human Users Avoid Undesirable Outcomes
title_full_unstemmed Models of Intervention: Helping Agents and Human Users Avoid Undesirable Outcomes
title_short Models of Intervention: Helping Agents and Human Users Avoid Undesirable Outcomes
title_sort models of intervention: helping agents and human users avoid undesirable outcomes
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8851243/
https://www.ncbi.nlm.nih.gov/pubmed/35187470
http://dx.doi.org/10.3389/frai.2021.723936
work_keys_str_mv AT weerawardhanasachini modelsofinterventionhelpingagentsandhumanusersavoidundesirableoutcomes
AT whitleydarrell modelsofinterventionhelpingagentsandhumanusersavoidundesirableoutcomes
AT robertsmark modelsofinterventionhelpingagentsandhumanusersavoidundesirableoutcomes