Scaffolding the human partner by contrastive guidance in an explanatory human-robot dialogue


Bibliographic Details
Main Authors: Groß, André, Singh, Amit, Banh, Ngoc Chi, Richter, Birte, Scharlau, Ingrid, Rohlfing, Katharina J., Wrede, Britta
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10642948/
https://www.ncbi.nlm.nih.gov/pubmed/37965633
http://dx.doi.org/10.3389/frobt.2023.1236184
_version_ 1785147051413078016
author Groß, André
Singh, Amit
Banh, Ngoc Chi
Richter, Birte
Scharlau, Ingrid
Rohlfing, Katharina J.
Wrede, Britta
author_facet Groß, André
Singh, Amit
Banh, Ngoc Chi
Richter, Birte
Scharlau, Ingrid
Rohlfing, Katharina J.
Wrede, Britta
author_sort Groß, André
collection PubMed
description Explanation has been identified as an important capability for AI-based systems, but research on systematic strategies for achieving understanding in interaction with such systems is still sparse. Negation is a linguistic strategy that is often used in explanations. It creates a contrast space between the affirmed and the negated item that enriches explaining processes with additional contextual information. While negation in human speech has been shown to lead to higher processing costs and worse task performance in terms of recall or action execution when used in isolation, it can decrease processing costs when used in context. So far, it has not been considered as a guiding strategy for explanations in human-robot interaction. We conducted an empirical study to investigate the use of negation as a guiding strategy in explanatory human-robot dialogue, in which a virtual robot explains tasks and possible actions to a human explainee to solve them in terms of gestures on a touchscreen. Our results show that negation vs. affirmation 1) increases processing costs measured as reaction time and 2) increases several aspects of task performance. While there was no significant effect of negation on the number of initially correctly executed gestures, we found a significantly lower number of attempts—measured as breaks in the finger movement data before the correct gesture was carried out—when being instructed through a negation. We further found that the gestures significantly resembled the presented prototype gesture more following an instruction with a negation as opposed to an affirmation. Also, the participants rated the benefit of contrastive vs. affirmative explanations significantly higher. Repeating the instructions decreased the effects of negation, yielding similar processing costs and task performance measures for negation and affirmation after several iterations. We discuss our results with respect to possible effects of negation on linguistic processing of explanations and limitations of our study.
format Online
Article
Text
id pubmed-10642948
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-106429482023-11-14 Scaffolding the human partner by contrastive guidance in an explanatory human-robot dialogue Groß, André Singh, Amit Banh, Ngoc Chi Richter, Birte Scharlau, Ingrid Rohlfing, Katharina J. Wrede, Britta Front Robot AI Robotics and AI Frontiers Media S.A. 2023-10-30 /pmc/articles/PMC10642948/ /pubmed/37965633 http://dx.doi.org/10.3389/frobt.2023.1236184 Text en Copyright © 2023 Groß, Singh, Banh, Richter, Scharlau, Rohlfing and Wrede. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Robotics and AI
Groß, André
Singh, Amit
Banh, Ngoc Chi
Richter, Birte
Scharlau, Ingrid
Rohlfing, Katharina J.
Wrede, Britta
Scaffolding the human partner by contrastive guidance in an explanatory human-robot dialogue
title Scaffolding the human partner by contrastive guidance in an explanatory human-robot dialogue
title_full Scaffolding the human partner by contrastive guidance in an explanatory human-robot dialogue
title_fullStr Scaffolding the human partner by contrastive guidance in an explanatory human-robot dialogue
title_full_unstemmed Scaffolding the human partner by contrastive guidance in an explanatory human-robot dialogue
title_short Scaffolding the human partner by contrastive guidance in an explanatory human-robot dialogue
title_sort scaffolding the human partner by contrastive guidance in an explanatory human-robot dialogue
topic Robotics and AI
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10642948/
https://www.ncbi.nlm.nih.gov/pubmed/37965633
http://dx.doi.org/10.3389/frobt.2023.1236184
work_keys_str_mv AT großandre scaffoldingthehumanpartnerbycontrastiveguidanceinanexplanatoryhumanrobotdialogue
AT singhamit scaffoldingthehumanpartnerbycontrastiveguidanceinanexplanatoryhumanrobotdialogue
AT banhngocchi scaffoldingthehumanpartnerbycontrastiveguidanceinanexplanatoryhumanrobotdialogue
AT richterbirte scaffoldingthehumanpartnerbycontrastiveguidanceinanexplanatoryhumanrobotdialogue
AT scharlauingrid scaffoldingthehumanpartnerbycontrastiveguidanceinanexplanatoryhumanrobotdialogue
AT rohlfingkatharinaj scaffoldingthehumanpartnerbycontrastiveguidanceinanexplanatoryhumanrobotdialogue
AT wredebritta scaffoldingthehumanpartnerbycontrastiveguidanceinanexplanatoryhumanrobotdialogue