Visual behavior modelling for robotic theory of mind
Behavior modeling is an essential cognitive ability that underlies many aspects of human and animal social behavior (Watson in Psychol Rev 20:158, 1913), and an ability with which we would like to endow robots. Most studies of machine behavior modelling, however, rely on symbolic or selected parametric sensory inputs and built-in knowledge relevant to a given task. Here, we propose that an observer can model the behavior of an actor through visual processing alone, without any prior symbolic information and assumptions about relevant inputs. To test this hypothesis, we designed a non-verbal, non-symbolic robotic experiment in which an observer must visualize future plans of an actor robot, based only on an image depicting the initial scene of the actor robot. We found that an AI observer is able to visualize the future plans of the actor with 98.5% success across four different activities, even when the activity is not known a priori. We hypothesize that such visual behavior modeling is an essential cognitive ability that will allow machines to understand and coordinate with surrounding agents, while sidestepping the notorious symbol grounding problem. Through a false-belief test, we suggest that this approach may be a precursor to Theory of Mind, one of the distinguishing hallmarks of primate social cognition.
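The record gives only the abstract, but the setup it describes — an observer network that takes a single image of the actor robot's initial scene and outputs an image visualizing the actor's future plan — amounts to an image-to-image prediction problem. Below is a minimal, hypothetical PyTorch sketch of that formulation; the encoder-decoder layout, layer sizes, 64×64 resolution, and pixel-wise MSE loss are all illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of "visual behavior modelling" as image-to-image
# prediction: given an image of the actor robot's initial scene, predict
# an image that visualizes the actor's future plan. All architectural
# choices here are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class VisualBehaviorObserver(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the initial-scene image into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
        )
        # Decoder: expand the features into a predicted future frame.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, initial_scene: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(initial_scene))

# Toy training step. In the experiment the targets would be frames
# recorded while watching the actor robot; random tensors stand in
# here purely to show the shape of the training loop.
model = VisualBehaviorObserver()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

initial = torch.rand(8, 3, 64, 64)  # batch of initial-scene images
future = torch.rand(8, 3, 64, 64)   # corresponding future frames

optimizer.zero_grad()
loss = loss_fn(model(initial), future)
loss.backward()
optimizer.step()
```

Framing the task purely in pixel space is what lets the observer sidestep symbol grounding: nothing in the sketch names joints, goals, or activities, so success can be measured directly as agreement between predicted and observed future frames.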
Main authors: | Chen, Boyuan; Vondrick, Carl; Lipson, Hod |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2021 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7801744/ https://www.ncbi.nlm.nih.gov/pubmed/33431917 http://dx.doi.org/10.1038/s41598-020-77918-x |
_version_ | 1783635641685245952 |
---|---|
author | Chen, Boyuan; Vondrick, Carl; Lipson, Hod
author_facet | Chen, Boyuan; Vondrick, Carl; Lipson, Hod
author_sort | Chen, Boyuan |
collection | PubMed |
description | Behavior modeling is an essential cognitive ability that underlies many aspects of human and animal social behavior (Watson in Psychol Rev 20:158, 1913), and an ability with which we would like to endow robots. Most studies of machine behavior modelling, however, rely on symbolic or selected parametric sensory inputs and built-in knowledge relevant to a given task. Here, we propose that an observer can model the behavior of an actor through visual processing alone, without any prior symbolic information and assumptions about relevant inputs. To test this hypothesis, we designed a non-verbal, non-symbolic robotic experiment in which an observer must visualize future plans of an actor robot, based only on an image depicting the initial scene of the actor robot. We found that an AI observer is able to visualize the future plans of the actor with 98.5% success across four different activities, even when the activity is not known a priori. We hypothesize that such visual behavior modeling is an essential cognitive ability that will allow machines to understand and coordinate with surrounding agents, while sidestepping the notorious symbol grounding problem. Through a false-belief test, we suggest that this approach may be a precursor to Theory of Mind, one of the distinguishing hallmarks of primate social cognition.
format | Online Article Text |
id | pubmed-7801744 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-7801744 2021-01-13 Visual behavior modelling for robotic theory of mind Chen, Boyuan; Vondrick, Carl; Lipson, Hod Sci Rep Article [abstract as in the description field above] Nature Publishing Group UK 2021-01-11 /pmc/articles/PMC7801744/ /pubmed/33431917 http://dx.doi.org/10.1038/s41598-020-77918-x Text en © The Author(s) 2021. Open Access under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article; Chen, Boyuan; Vondrick, Carl; Lipson, Hod; Visual behavior modelling for robotic theory of mind
title | Visual behavior modelling for robotic theory of mind |
title_full | Visual behavior modelling for robotic theory of mind |
title_fullStr | Visual behavior modelling for robotic theory of mind |
title_full_unstemmed | Visual behavior modelling for robotic theory of mind |
title_short | Visual behavior modelling for robotic theory of mind |
title_sort | visual behavior modelling for robotic theory of mind |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7801744/ https://www.ncbi.nlm.nih.gov/pubmed/33431917 http://dx.doi.org/10.1038/s41598-020-77918-x |
work_keys_str_mv | AT chenboyuan visualbehaviormodellingforrobotictheoryofmind AT vondrickcarl visualbehaviormodellingforrobotictheoryofmind AT lipsonhod visualbehaviormodellingforrobotictheoryofmind |