
Reliable and transparent in-vehicle agents lead to higher behavioral trust in conditionally automated driving systems

Trust is critical for human-automation collaboration, especially under safety-critical tasks such as driving. Providing explainable information on how the automation system reaches decisions and predictions can improve system transparency, which is believed to further facilitate driver trust and user evaluation of the automated vehicles. However, what the optimal level of transparency is and how the system communicates it to calibrate drivers’ trust and improve their driving performance remain uncertain. Such uncertainty becomes even more unpredictable given that the system reliability remains dynamic due to current technological limitations. To address this issue in conditionally automated vehicles, a total of 30 participants were recruited in a driving simulator study and assigned to either a low or a high system reliability condition. They experienced two driving scenarios accompanied by two types of in-vehicle agents delivering information with different transparency types: “what”-then-wait (on-demand) and “what + why” (proactive). The on-demand agent provided some information about the upcoming event and delivered more information if prompted by the driver, whereas the proactive agent provided all information at once. Results indicated that the on-demand agent was more habitable, or naturalistic, to drivers and was perceived with faster system response speed compared to the proactive agent. Drivers under the high-reliability condition complied with the takeover request (TOR) more (if the agent was on-demand) and had shorter takeover times (in both agent conditions) compared to those under the low-reliability condition. These findings inspire how the automation system can deliver information to improve system transparency while adapting to system reliability and user evaluation, which further contributes to driver trust calibration and performance correction in future automated vehicles.


Bibliographic Details
Main Authors: Taylor, Skye; Wang, Manhua; Jeon, Myounghoon
Format: Online, Article, Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10232983/
https://www.ncbi.nlm.nih.gov/pubmed/37275735
http://dx.doi.org/10.3389/fpsyg.2023.1121622
_version_ 1785052126757519360
author Taylor, Skye
Wang, Manhua
Jeon, Myounghoon
author_sort Taylor, Skye
collection PubMed
format Online
Article
Text
id pubmed-10232983
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10232983 2023-06-02 Front Psychol Psychology Frontiers Media S.A. 2023-05-18 /pmc/articles/PMC10232983/ /pubmed/37275735 http://dx.doi.org/10.3389/fpsyg.2023.1121622 Text en Copyright © 2023 Taylor, Wang and Jeon. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Reliable and transparent in-vehicle agents lead to higher behavioral trust in conditionally automated driving systems
title_sort reliable and transparent in-vehicle agents lead to higher behavioral trust in conditionally automated driving systems
topic Psychology
work_keys_str_mv AT taylorskye reliableandtransparentinvehicleagentsleadtohigherbehavioraltrustinconditionallyautomateddrivingsystems
AT wangmanhua reliableandtransparentinvehicleagentsleadtohigherbehavioraltrustinconditionallyautomateddrivingsystems
AT jeonmyounghoon reliableandtransparentinvehicleagentsleadtohigherbehavioraltrustinconditionallyautomateddrivingsystems