Trust in the Danger Zone: Individual Differences in Confidence in Robot Threat Assessments
Main Authors: | Lin, Jinchao; Panganiban, April Rose; Matthews, Gerald; Gibbins, Katey; Ankeney, Emily; See, Carlie; Bailey, Rachel; Long, Michael |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2022 |
Subjects: | Psychology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9008327/ https://www.ncbi.nlm.nih.gov/pubmed/35432066 http://dx.doi.org/10.3389/fpsyg.2022.601523 |
_version_ | 1784687027640336384 |
---|---|
author | Lin, Jinchao Panganiban, April Rose Matthews, Gerald Gibbins, Katey Ankeney, Emily See, Carlie Bailey, Rachel Long, Michael |
author_facet | Lin, Jinchao Panganiban, April Rose Matthews, Gerald Gibbins, Katey Ankeney, Emily See, Carlie Bailey, Rachel Long, Michael |
author_sort | Lin, Jinchao |
collection | PubMed |
description | Effective human–robot teaming (HRT) increasingly requires humans to work with intelligent, autonomous machines. However, novel features of intelligent autonomous systems such as social agency and incomprehensibility may influence the human’s trust in the machine. The human operator’s mental model for machine functioning is critical for trust. People may consider an intelligent machine partner as either an advanced tool or as a human-like teammate. This article reports a study that explored the role of individual differences in the mental model in a simulated environment. Multiple dispositional factors that may influence the dominant mental model were assessed. These included the Robot Threat Assessment (RoTA), which measures the person’s propensity to apply tool and teammate models in security contexts. Participants (N = 118) were paired with an intelligent robot tasked with making threat assessments in an urban setting. A transparency manipulation was used to influence the dominant mental model. For half of the participants, threat assessment was described as physics-based (e.g., weapons sensed by sensors); the remainder received transparency information that described psychological cues (e.g., facial expression). We expected that the physics-based transparency messages would guide the participant toward treating the robot as an advanced machine (advanced tool mental model activation), while psychological messaging would encourage perceptions of the robot as acting like a human partner (teammate mental model). We also manipulated situational danger cues present in the simulated environment. Participants rated their trust in the robot’s decision as well as threat and anxiety, for each of 24 urban scenes. They also completed the RoTA and additional individual-difference measures. Findings showed that trust assessments reflected the degree of congruence between the robot’s decision and situational danger cues, consistent with participants acting as Bayesian decision makers. Several scales, including the RoTA, were more predictive of trust when the robot was making psychology-based decisions, implying that trust reflected individual differences in the mental model of the robot as a teammate. These findings suggest scope for designing training that uncovers and mitigates the individual’s biases toward intelligent machines. |
format | Online Article Text |
id | pubmed-9008327 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-90083272022-04-15 Trust in the Danger Zone: Individual Differences in Confidence in Robot Threat Assessments Lin, Jinchao Panganiban, April Rose Matthews, Gerald Gibbins, Katey Ankeney, Emily See, Carlie Bailey, Rachel Long, Michael Front Psychol Psychology Effective human–robot teaming (HRT) increasingly requires humans to work with intelligent, autonomous machines. However, novel features of intelligent autonomous systems such as social agency and incomprehensibility may influence the human’s trust in the machine. The human operator’s mental model for machine functioning is critical for trust. People may consider an intelligent machine partner as either an advanced tool or as a human-like teammate. This article reports a study that explored the role of individual differences in the mental model in a simulated environment. Multiple dispositional factors that may influence the dominant mental model were assessed. These included the Robot Threat Assessment (RoTA), which measures the person’s propensity to apply tool and teammate models in security contexts. Participants (N = 118) were paired with an intelligent robot tasked with making threat assessments in an urban setting. A transparency manipulation was used to influence the dominant mental model. For half of the participants, threat assessment was described as physics-based (e.g., weapons sensed by sensors); the remainder received transparency information that described psychological cues (e.g., facial expression). We expected that the physics-based transparency messages would guide the participant toward treating the robot as an advanced machine (advanced tool mental model activation), while psychological messaging would encourage perceptions of the robot as acting like a human partner (teammate mental model). We also manipulated situational danger cues present in the simulated environment. Participants rated their trust in the robot’s decision as well as threat and anxiety, for each of 24 urban scenes. They also completed the RoTA and additional individual-difference measures. Findings showed that trust assessments reflected the degree of congruence between the robot’s decision and situational danger cues, consistent with participants acting as Bayesian decision makers. Several scales, including the RoTA, were more predictive of trust when the robot was making psychology-based decisions, implying that trust reflected individual differences in the mental model of the robot as a teammate. These findings suggest scope for designing training that uncovers and mitigates the individual’s biases toward intelligent machines. Frontiers Media S.A. 2022-03-31 /pmc/articles/PMC9008327/ /pubmed/35432066 http://dx.doi.org/10.3389/fpsyg.2022.601523 Text en Copyright © 2022 Lin, Panganiban, Matthews, Gibbins, Ankeney, See, Bailey and Long. https://creativecommons.org/licenses/by/4.0/This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Lin, Jinchao Panganiban, April Rose Matthews, Gerald Gibbins, Katey Ankeney, Emily See, Carlie Bailey, Rachel Long, Michael Trust in the Danger Zone: Individual Differences in Confidence in Robot Threat Assessments |
title | Trust in the Danger Zone: Individual Differences in Confidence in Robot Threat Assessments |
title_full | Trust in the Danger Zone: Individual Differences in Confidence in Robot Threat Assessments |
title_fullStr | Trust in the Danger Zone: Individual Differences in Confidence in Robot Threat Assessments |
title_full_unstemmed | Trust in the Danger Zone: Individual Differences in Confidence in Robot Threat Assessments |
title_short | Trust in the Danger Zone: Individual Differences in Confidence in Robot Threat Assessments |
title_sort | trust in the danger zone: individual differences in confidence in robot threat assessments |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9008327/ https://www.ncbi.nlm.nih.gov/pubmed/35432066 http://dx.doi.org/10.3389/fpsyg.2022.601523 |
work_keys_str_mv | AT linjinchao trustinthedangerzoneindividualdifferencesinconfidenceinrobotthreatassessments AT panganibanaprilrose trustinthedangerzoneindividualdifferencesinconfidenceinrobotthreatassessments AT matthewsgerald trustinthedangerzoneindividualdifferencesinconfidenceinrobotthreatassessments AT gibbinskatey trustinthedangerzoneindividualdifferencesinconfidenceinrobotthreatassessments AT ankeneyemily trustinthedangerzoneindividualdifferencesinconfidenceinrobotthreatassessments AT seecarlie trustinthedangerzoneindividualdifferencesinconfidenceinrobotthreatassessments AT baileyrachel trustinthedangerzoneindividualdifferencesinconfidenceinrobotthreatassessments AT longmichael trustinthedangerzoneindividualdifferencesinconfidenceinrobotthreatassessments |