
Would a robot trust you? Developmental robotics model of trust and theory of mind

Bibliographic Details
Main Authors: Vinanzi, Samuele, Patacchiola, Massimiliano, Chella, Antonio, Cangelosi, Angelo
Format: Online Article Text
Language: English
Published: The Royal Society 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6452250/
https://www.ncbi.nlm.nih.gov/pubmed/30852993
http://dx.doi.org/10.1098/rstb.2018.0032
_version_ 1783409273803374592
author Vinanzi, Samuele
Patacchiola, Massimiliano
Chella, Antonio
Cangelosi, Angelo
author_facet Vinanzi, Samuele
Patacchiola, Massimiliano
Chella, Antonio
Cangelosi, Angelo
author_sort Vinanzi, Samuele
collection PubMed
description Trust is a critical issue in human–robot interactions: as robotic systems gain complexity, it becomes crucial for them to be able to blend into our society by maximizing their acceptability and reliability. Various studies have examined how trust is attributed by people to robots, but fewer have investigated the opposite scenario, where a robot is the trustor and a human is the trustee. The ability for an agent to evaluate the trustworthiness of its sources of information is particularly useful in joint task situations where people and robots must collaborate to reach shared goals. We propose an artificial cognitive architecture based on the developmental robotics paradigm that can estimate the trustworthiness of its human interactors for the purpose of decision making. This is accomplished using Theory of Mind (ToM), the psychological ability to assign to others beliefs and intentions that can differ from one's own. Our work is focused on a humanoid robot cognitive architecture that integrates a probabilistic ToM and trust model supported by an episodic memory system. We tested our architecture on an established developmental psychological experiment, achieving the same results obtained by children, thus demonstrating a new method to enhance the quality of human and robot collaborations. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
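The abstract describes a robot that estimates the trustworthiness of human informants and uses that estimate for decision making. The following is a minimal, hypothetical sketch of that general idea using a Beta-Bernoulli reliability model; it is not the authors' architecture (which combines a probabilistic Theory of Mind model with an episodic memory system), and the class and method names are illustrative only.

```python
class InformantTrust:
    """Tracks trust in one human informant as a Beta(alpha, beta) belief.

    Hypothetical sketch: each interaction episode in which the informant
    proves helpful or misleading updates the pseudo-counts, and the
    posterior mean serves as the robot's trust estimate.
    """

    def __init__(self, alpha: float = 1.0, beta: float = 1.0) -> None:
        # Beta(1, 1) is a uniform prior: no evidence either way.
        self.alpha = alpha  # pseudo-count of helpful episodes
        self.beta = beta    # pseudo-count of misleading episodes

    def observe(self, was_correct: bool) -> None:
        """Update the belief after one interaction episode."""
        if was_correct:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        """Posterior mean probability that the informant is reliable."""
        return self.alpha / (self.alpha + self.beta)

    def should_follow(self, threshold: float = 0.5) -> bool:
        """Follow the informant's advice only if trust exceeds the threshold."""
        return self.trust > threshold


# Example: an informant who is right three times and wrong once.
helper = InformantTrust()
for outcome in (True, True, False, True):
    helper.observe(outcome)
print(round(helper.trust, 2), helper.should_follow())  # → 0.67 True
```

In the developmental experiments the paper builds on, children similarly come to prefer informants with a history of accurate labeling; a threshold on an accumulated reliability estimate is one simple way to mirror that behavior in a decision rule.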
format Online
Article
Text
id pubmed-6452250
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher The Royal Society
record_format MEDLINE/PubMed
spelling pubmed-64522502019-04-18 Would a robot trust you? Developmental robotics model of trust and theory of mind Vinanzi, Samuele Patacchiola, Massimiliano Chella, Antonio Cangelosi, Angelo Philos Trans R Soc Lond B Biol Sci Articles Trust is a critical issue in human–robot interactions: as robotic systems gain complexity, it becomes crucial for them to be able to blend into our society by maximizing their acceptability and reliability. Various studies have examined how trust is attributed by people to robots, but fewer have investigated the opposite scenario, where a robot is the trustor and a human is the trustee. The ability for an agent to evaluate the trustworthiness of its sources of information is particularly useful in joint task situations where people and robots must collaborate to reach shared goals. We propose an artificial cognitive architecture based on the developmental robotics paradigm that can estimate the trustworthiness of its human interactors for the purpose of decision making. This is accomplished using Theory of Mind (ToM), the psychological ability to assign to others beliefs and intentions that can differ from one's own. Our work is focused on a humanoid robot cognitive architecture that integrates a probabilistic ToM and trust model supported by an episodic memory system. We tested our architecture on an established developmental psychological experiment, achieving the same results obtained by children, thus demonstrating a new method to enhance the quality of human and robot collaborations. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’. The Royal Society 2019-04-29 2019-03-11 /pmc/articles/PMC6452250/ /pubmed/30852993 http://dx.doi.org/10.1098/rstb.2018.0032 Text en © 2019 The Authors.
http://creativecommons.org/licenses/by/4.0/ Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.
spellingShingle Articles
Vinanzi, Samuele
Patacchiola, Massimiliano
Chella, Antonio
Cangelosi, Angelo
Would a robot trust you? Developmental robotics model of trust and theory of mind
title Would a robot trust you? Developmental robotics model of trust and theory of mind
title_full Would a robot trust you? Developmental robotics model of trust and theory of mind
title_fullStr Would a robot trust you? Developmental robotics model of trust and theory of mind
title_full_unstemmed Would a robot trust you? Developmental robotics model of trust and theory of mind
title_short Would a robot trust you? Developmental robotics model of trust and theory of mind
title_sort would a robot trust you? developmental robotics model of trust and theory of mind
topic Articles
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6452250/
https://www.ncbi.nlm.nih.gov/pubmed/30852993
http://dx.doi.org/10.1098/rstb.2018.0032
work_keys_str_mv AT vinanzisamuele wouldarobottrustyoudevelopmentalroboticsmodeloftrustandtheoryofmind
AT patacchiolamassimiliano wouldarobottrustyoudevelopmentalroboticsmodeloftrustandtheoryofmind
AT chellaantonio wouldarobottrustyoudevelopmentalroboticsmodeloftrustandtheoryofmind
AT cangelosiangelo wouldarobottrustyoudevelopmentalroboticsmodeloftrustandtheoryofmind