A multimodal dataset of spontaneous speech and movement production on object affordances
In the longstanding effort of defining object affordances, a number of resources have been developed on objects and associated knowledge. These resources, however, have limited potential for modeling and generalization, mainly due to the restricted, stimulus-bound data collection methodologies adopted.
Main Authors: Vatakis, Argiro; Pastra, Katerina
Format: Online Article Text
Language: English
Published: Nature Publishing Group, 2016
Subjects: Data Descriptor
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4718047/ https://www.ncbi.nlm.nih.gov/pubmed/26784391 http://dx.doi.org/10.1038/sdata.2015.78
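The record can also be retrieved programmatically from the PubMed ID given in the access links above. The following is a minimal sketch using NCBI's public E-utilities `esummary` endpoint; the exact keys read from the JSON response (e.g., `title`, `authors`) are assumptions and may differ from the live response.

```python
# Minimal sketch: fetch summary metadata for this record from NCBI E-utilities.
# The PubMed ID comes from the "Online Access" links above; response field
# names used below are assumptions.
import json
import urllib.request

PUBMED_ID = "26784391"
URL = (
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"
    f"?db=pubmed&id={PUBMED_ID}&retmode=json"
)

with urllib.request.urlopen(URL, timeout=30) as resp:
    summary = json.load(resp)

record = summary["result"][PUBMED_ID]
print(record.get("title"))
print([author.get("name") for author in record.get("authors", [])])
```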
_version_ | 1782410729913057280 |
author | Vatakis, Argiro Pastra, Katerina |
author_facet | Vatakis, Argiro Pastra, Katerina |
author_sort | Vatakis, Argiro |
collection | PubMed |
description | In the longstanding effort of defining object affordances, a number of resources have been developed on objects and associated knowledge. These resources, however, have limited potential for modeling and generalization mainly due to the restricted, stimulus-bound data collection methodologies adopted. To-date, therefore, there exists no resource that truly captures object affordances in a direct, multimodal, and naturalistic way. Here, we present the first such resource of ‘thinking aloud’, spontaneously-generated verbal and motoric data on object affordances. This resource was developed from the reports of 124 participants divided into three behavioural experiments with visuo-tactile stimulation, which were captured audiovisually from two camera-views (frontal/profile). This methodology allowed the acquisition of approximately 95 hours of video, audio, and text data covering: object-feature-action data (e.g., perceptual features, namings, functions), Exploratory Acts (haptic manipulation for feature acquisition/verification), gestures and demonstrations for object/feature/action description, and reasoning patterns (e.g., justifications, analogies) for attributing a given characterization. The wealth and content of the data make this corpus a one-of-a-kind resource for the study and modeling of object affordances. |
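As a purely illustrative aid, the sketch below shows one way the annotation categories named in the description (object-feature-action data, Exploratory Acts, gestures/demonstrations, reasoning patterns) could be organised in code. All class and field names here are hypothetical and do not reflect the dataset's actual schema or file formats.

```python
# Hypothetical representation of one annotated trial; names are illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrialAnnotation:
    participant_id: str                 # one of the 124 participants
    experiment: int                     # 1-3: the three behavioural experiments
    camera_views: List[str] = field(default_factory=lambda: ["frontal", "profile"])
    object_features: List[str] = field(default_factory=list)   # perceptual features, namings, functions
    exploratory_acts: List[str] = field(default_factory=list)  # haptic manipulation for feature acquisition/verification
    gestures: List[str] = field(default_factory=list)          # gestures/demonstrations of object/feature/action
    reasoning: List[str] = field(default_factory=list)         # justifications, analogies, etc.

# Example with made-up values:
trial = TrialAnnotation(
    participant_id="P001",
    experiment=1,
    object_features=["cylindrical", "metallic"],
    exploratory_acts=["contour following"],
    gestures=["drinking demonstration"],
    reasoning=["analogy: 'it looks like a mug, so it should hold liquid'"],
)
print(trial.participant_id, len(trial.object_features))
```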
format | Online Article Text |
id | pubmed-4718047 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2016 |
publisher | Nature Publishing Group |
record_format | MEDLINE/PubMed |
spelling | pubmed-4718047 (2016-02-12) A multimodal dataset of spontaneous speech and movement production on object affordances. Vatakis, Argiro; Pastra, Katerina. Sci Data, Data Descriptor. Nature Publishing Group, published 2016-01-19. /pmc/articles/PMC4718047/ /pubmed/26784391 http://dx.doi.org/10.1038/sdata.2015.78 Text, en. Copyright © 2016, Macmillan Publishers Limited. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/). The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. Metadata associated with this Data Descriptor is available at http://www.nature.com/sdata/ and is released under the CC0 waiver to maximize reuse. |
spellingShingle | Data Descriptor Vatakis, Argiro Pastra, Katerina A multimodal dataset of spontaneous speech and movement production on object affordances |
title | A multimodal dataset of spontaneous speech and movement production on object affordances |
title_full | A multimodal dataset of spontaneous speech and movement production on object affordances |
title_fullStr | A multimodal dataset of spontaneous speech and movement production on object affordances |
title_full_unstemmed | A multimodal dataset of spontaneous speech and movement production on object affordances |
title_short | A multimodal dataset of spontaneous speech and movement production on object affordances |
title_sort | multimodal dataset of spontaneous speech and movement production on object affordances |
topic | Data Descriptor |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4718047/ https://www.ncbi.nlm.nih.gov/pubmed/26784391 http://dx.doi.org/10.1038/sdata.2015.78 |
work_keys_str_mv | AT vatakisargiro amultimodaldatasetofspontaneousspeechandmovementproductiononobjectaffordances AT pastrakaterina amultimodaldatasetofspontaneousspeechandmovementproductiononobjectaffordances AT vatakisargiro multimodaldatasetofspontaneousspeechandmovementproductiononobjectaffordances AT pastrakaterina multimodaldatasetofspontaneousspeechandmovementproductiononobjectaffordances |