
Experiments in Artificial Theory of Mind: From Safety to Story-Telling

Theory of mind is the term given by philosophers and psychologists for the ability to form a predictive model of self and others. In this paper we focus on synthetic models of theory of mind. We contend firstly that such models—especially when tested experimentally—can provide useful insights into cognition, and secondly that artificial theory of mind can provide intelligent robots with powerful new capabilities, in particular social intelligence for human-robot interaction. This paper advances the hypothesis that simulation-based internal models offer a powerful and realisable, theory-driven basis for artificial theory of mind. Proposed as a computational model of the simulation theory of mind, our simulation-based internal model equips a robot with an internal model of itself and its environment, including other dynamic actors, which can test (i.e., simulate) the robot's next possible actions and hence anticipate the likely consequences of those actions both for itself and others. Although it falls far short of a full artificial theory of mind, our model does allow us to test several interesting scenarios: in some of these a robot equipped with the internal model interacts with other robots without an internal model, but acting as proxy humans; in others two robots each with a simulation-based internal model interact with each other. We outline a series of experiments which each demonstrate some aspect of artificial theory of mind.
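The abstract describes the mechanism only in outline: the robot internally simulates each of its next possible actions and anticipates the likely consequences for itself and for other actors before acting. As a rough illustration of that generate-simulate-evaluate-select loop, the following Python sketch is a minimal, hypothetical example; the names WorldModel, simulate, evaluate and choose_action are assumptions for this illustration and do not come from the paper.

# Minimal sketch of a simulation-based internal model (illustrative only;
# the toy 1-D world below does not reflect the paper's actual experiments).
from dataclasses import dataclass

@dataclass
class WorldModel:
    robot_pos: int = 2   # the robot's own position
    other_pos: int = 0   # another dynamic actor (e.g., a proxy human)
    hazard_pos: int = 3  # a location to be avoided

    def simulate(self, action: int) -> "WorldModel":
        # Predict the next state if the robot moves by `action` (-1, 0 or +1),
        # assuming the other actor drifts one step forward each cycle.
        return WorldModel(self.robot_pos + action, self.other_pos + 1, self.hazard_pos)

def evaluate(state: WorldModel) -> float:
    # Score a predicted state: penalise consequences harmful to self or to the other actor.
    score = 0.0
    if state.robot_pos == state.hazard_pos:
        score -= 10.0
    if state.other_pos == state.hazard_pos:
        score -= 10.0
    return score

def choose_action(world: WorldModel, actions=(-1, 0, 1)) -> int:
    # Internally simulate every candidate action and select the best-scoring outcome.
    return max(actions, key=lambda a: evaluate(world.simulate(a)))

print(choose_action(WorldModel()))  # prints -1: the move onto the hazard (+1) is rejected,
                                    # and max breaks the tie between -1 and 0 in favour of -1

The sketch only conveys the basic loop of generating candidate actions, simulating them in an internal model, and evaluating the predicted consequences for self and others; the experiments outlined in the paper use far richer internal models and real robots.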


Bibliographic Details
Main Author: Winfield, Alan F. T.
Format: Online Article Text
Language: English
Journal: Front Robot AI (Frontiers in Robotics and AI)
Published: Frontiers Media S.A., 2018-06-26
Subjects: Robotics and AI
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7806090/
https://www.ncbi.nlm.nih.gov/pubmed/33500954
http://dx.doi.org/10.3389/frobt.2018.00075
Collection: PubMed (record id pubmed-7806090, National Center for Biotechnology Information; record format MEDLINE/PubMed)
Rights: Copyright © 2018 Winfield. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY), http://creativecommons.org/licenses/by/4.0/. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.