Trust as Extended Control: Human-Machine Interactions as Active Inference

In order to interact seamlessly with robots, users must infer the causes of a robot’s behavior, and be confident about that inference (and its predictions). Hence, trust is a necessary condition for human-robot collaboration (HRC). However, despite its crucial role, it is still largely unknown how trust emerges, develops, and supports human relationships with technological systems. In the following paper, we review the literature on trust, human-robot interaction, HRC, and human interaction at large. Early models of trust suggest that it is a trade-off between benevolence and competence, while studies of human-to-human interaction emphasize the role of shared behavior and mutual knowledge in the gradual building of trust. We go on to introduce a model of trust as an agent’s best explanation for reliable sensory exchange with an extended motor plant or partner. This model is based on the cognitive neuroscience of active inference and suggests that, in the context of HRC, trust can be cast in terms of virtual control over an artificial agent. Interactive feedback is a necessary condition for the extension of the trustor’s perception-action cycle. This model has important implications for understanding human-robot interaction and collaboration, as it allows the traditional determinants of human trust, such as the benevolence and competence attributed to the trustee, to be defined in terms of hierarchical active inference, while vulnerability can be described in terms of information exchange and empowerment. Furthermore, this model emphasizes the role of user feedback during HRC and suggests that boredom and surprise may be used in personalized interactions as markers for under- and over-reliance on the system. The description of trust as a sense of virtual control offers a crucial step toward grounding human factors in cognitive neuroscience and improving the design of human-centered technology. Finally, we examine the role of shared behavior in the genesis of trust, especially in the context of dyadic collaboration, suggesting important consequences for the acceptability and design of human-robot collaborative systems.

Bibliographic Details
Main Authors: Schoeller, Felix, Miller, Mark, Salomon, Roy, Friston, Karl J.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects: Systems Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8548360/
https://www.ncbi.nlm.nih.gov/pubmed/34720895
http://dx.doi.org/10.3389/fnsys.2021.669810
_version_ 1784590556278554624
author Schoeller, Felix
Miller, Mark
Salomon, Roy
Friston, Karl J.
author_facet Schoeller, Felix
Miller, Mark
Salomon, Roy
Friston, Karl J.
author_sort Schoeller, Felix
collection PubMed
description In order to interact seamlessly with robots, users must infer the causes of a robot’s behavior, and be confident about that inference (and its predictions). Hence, trust is a necessary condition for human-robot collaboration (HRC). However, despite its crucial role, it is still largely unknown how trust emerges, develops, and supports human relationships with technological systems. In the following paper, we review the literature on trust, human-robot interaction, HRC, and human interaction at large. Early models of trust suggest that it is a trade-off between benevolence and competence, while studies of human-to-human interaction emphasize the role of shared behavior and mutual knowledge in the gradual building of trust. We go on to introduce a model of trust as an agent’s best explanation for reliable sensory exchange with an extended motor plant or partner. This model is based on the cognitive neuroscience of active inference and suggests that, in the context of HRC, trust can be cast in terms of virtual control over an artificial agent. Interactive feedback is a necessary condition for the extension of the trustor’s perception-action cycle. This model has important implications for understanding human-robot interaction and collaboration, as it allows the traditional determinants of human trust, such as the benevolence and competence attributed to the trustee, to be defined in terms of hierarchical active inference, while vulnerability can be described in terms of information exchange and empowerment. Furthermore, this model emphasizes the role of user feedback during HRC and suggests that boredom and surprise may be used in personalized interactions as markers for under- and over-reliance on the system. The description of trust as a sense of virtual control offers a crucial step toward grounding human factors in cognitive neuroscience and improving the design of human-centered technology. Finally, we examine the role of shared behavior in the genesis of trust, especially in the context of dyadic collaboration, suggesting important consequences for the acceptability and design of human-robot collaborative systems.
format Online
Article
Text
id pubmed-8548360
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-8548360 2021-10-28 Trust as Extended Control: Human-Machine Interactions as Active Inference Schoeller, Felix Miller, Mark Salomon, Roy Friston, Karl J. Front Syst Neurosci Systems Neuroscience In order to interact seamlessly with robots, users must infer the causes of a robot’s behavior, and be confident about that inference (and its predictions). Hence, trust is a necessary condition for human-robot collaboration (HRC). However, despite its crucial role, it is still largely unknown how trust emerges, develops, and supports human relationships with technological systems. In the following paper, we review the literature on trust, human-robot interaction, HRC, and human interaction at large. Early models of trust suggest that it is a trade-off between benevolence and competence, while studies of human-to-human interaction emphasize the role of shared behavior and mutual knowledge in the gradual building of trust. We go on to introduce a model of trust as an agent’s best explanation for reliable sensory exchange with an extended motor plant or partner. This model is based on the cognitive neuroscience of active inference and suggests that, in the context of HRC, trust can be cast in terms of virtual control over an artificial agent. Interactive feedback is a necessary condition for the extension of the trustor’s perception-action cycle. This model has important implications for understanding human-robot interaction and collaboration, as it allows the traditional determinants of human trust, such as the benevolence and competence attributed to the trustee, to be defined in terms of hierarchical active inference, while vulnerability can be described in terms of information exchange and empowerment. Furthermore, this model emphasizes the role of user feedback during HRC and suggests that boredom and surprise may be used in personalized interactions as markers for under- and over-reliance on the system. The description of trust as a sense of virtual control offers a crucial step toward grounding human factors in cognitive neuroscience and improving the design of human-centered technology. Finally, we examine the role of shared behavior in the genesis of trust, especially in the context of dyadic collaboration, suggesting important consequences for the acceptability and design of human-robot collaborative systems. Frontiers Media S.A. 2021-10-13 /pmc/articles/PMC8548360/ /pubmed/34720895 http://dx.doi.org/10.3389/fnsys.2021.669810 Text en Copyright © 2021 Schoeller, Miller, Salomon and Friston. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Systems Neuroscience
Schoeller, Felix
Miller, Mark
Salomon, Roy
Friston, Karl J.
Trust as Extended Control: Human-Machine Interactions as Active Inference
title Trust as Extended Control: Human-Machine Interactions as Active Inference
title_full Trust as Extended Control: Human-Machine Interactions as Active Inference
title_fullStr Trust as Extended Control: Human-Machine Interactions as Active Inference
title_full_unstemmed Trust as Extended Control: Human-Machine Interactions as Active Inference
title_short Trust as Extended Control: Human-Machine Interactions as Active Inference
title_sort trust as extended control: human-machine interactions as active inference
topic Systems Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8548360/
https://www.ncbi.nlm.nih.gov/pubmed/34720895
http://dx.doi.org/10.3389/fnsys.2021.669810
work_keys_str_mv AT schoellerfelix trustasextendedcontrolhumanmachineinteractionsasactiveinference
AT millermark trustasextendedcontrolhumanmachineinteractionsasactiveinference
AT salomonroy trustasextendedcontrolhumanmachineinteractionsasactiveinference
AT fristonkarlj trustasextendedcontrolhumanmachineinteractionsasactiveinference