Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework
Main Authors: | Daemi, Mehdi; Harris, Laurence R.; Crawford, J. Douglas |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2016 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4917558/ https://www.ncbi.nlm.nih.gov/pubmed/27445780 http://dx.doi.org/10.3389/fncom.2016.00062 |
_version_ | 1782438958511161344 |
---|---|
author | Daemi, Mehdi; Harris, Laurence R.; Crawford, J. Douglas
author_facet | Daemi, Mehdi; Harris, Laurence R.; Crawford, J. Douglas
author_sort | Daemi, Mehdi |
collection | PubMed |
description | Animals try to make sense of sensory information from multiple modalities by categorizing them into perceptions of individual or multiple external objects or internal concepts. For example, the brain constructs sensory, spatial representations of the locations of visual and auditory stimuli in the visual and auditory cortices based on retinal and cochlear stimulations. Currently, it is not known how the brain compares the temporal and spatial features of these sensory representations to decide whether they originate from the same or separate sources in space. Here, we propose a computational model of how the brain might solve such a task. We reduce the visual and auditory information to time-varying, finite-dimensional signals. We introduce controlled, leaky integrators as working memory that retains the sensory information for the limited time-course of task implementation. We propose our model within an evidence-based, decision-making framework, where the alternative plan units are saliency maps of space. A spatiotemporal similarity measure, computed directly from the unimodal signals, is suggested as the criterion to infer common or separate causes. We provide simulations that (1) validate our model against behavioral, experimental results in tasks where the participants were asked to report common or separate causes for cross-modal stimuli presented with arbitrary spatial and temporal disparities; (2) predict the behavior in novel experiments where stimuli have different combinations of spatial, temporal, and reliability features; and (3) illustrate the dynamics of the proposed internal system. These results confirm our spatiotemporal similarity measure as a viable criterion for causal inference, and our decision-making framework as a viable mechanism for target selection, which may be used by the brain in cross-modal situations. Further, we suggest that a similar approach can be extended to other cognitive problems where working memory is a limiting factor, such as target selection among higher numbers of stimuli and selections among other modality combinations. |
format | Online Article Text |
id | pubmed-4917558 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2016 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-4917558 2016-07-21 Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework Daemi, Mehdi; Harris, Laurence R.; Crawford, J. Douglas Front Comput Neurosci Neuroscience Frontiers Media S.A. 2016-06-23 /pmc/articles/PMC4917558/ /pubmed/27445780 http://dx.doi.org/10.3389/fncom.2016.00062 Text en Copyright © 2016 Daemi, Harris and Crawford. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Daemi, Mehdi Harris, Laurence R. Crawford, J. Douglas Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework |
title | Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework |
title_full | Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework |
title_fullStr | Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework |
title_full_unstemmed | Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework |
title_short | Causal Inference for Cross-Modal Action Selection: A Computational Study in a Decision Making Framework |
title_sort | causal inference for cross-modal action selection: a computational study in a decision making framework |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4917558/ https://www.ncbi.nlm.nih.gov/pubmed/27445780 http://dx.doi.org/10.3389/fncom.2016.00062 |
work_keys_str_mv | AT daemimehdi causalinferenceforcrossmodalactionselectionacomputationalstudyinadecisionmakingframework AT harrislaurencer causalinferenceforcrossmodalactionselectionacomputationalstudyinadecisionmakingframework AT crawfordjdouglas causalinferenceforcrossmodalactionselectionacomputationalstudyinadecisionmakingframework |
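The description field above outlines the model's main ingredients: controlled leaky integrators acting as working memory, a spatiotemporal similarity measure computed directly from the unimodal signals, and a decision between common and separate causes for the visual and auditory stimuli. Because this record does not include the paper's equations or parameters, the following Python sketch is only an illustration under our own assumptions: the function names, Gaussian stimulus shapes, leak/gain values, and the 0.5 decision threshold are hypothetical and are not taken from Daemi et al. (2016).

```python
import numpy as np

# Illustrative sketch of the kind of model the abstract describes; all details
# below (stimulus shapes, parameters, similarity measure, threshold) are our
# own assumptions, not the paper's actual implementation.

def gaussian_bump(center, positions, width):
    """Spatial 'saliency' profile: a Gaussian bump at the stimulus location."""
    return np.exp(-((positions - center) ** 2) / (2 * width ** 2))

def leaky_integrator(inputs, leak=0.1, gain=1.0, dt=0.01):
    """Controlled leaky integrator used here as a stand-in for working memory.

    inputs: array of shape (time, space). Each spatial channel integrates its
    drive and decays at rate `leak`, so the trace outlives the brief stimulus
    but fades over the task's time course.
    """
    memory = np.zeros_like(inputs)
    state = np.zeros(inputs.shape[1])
    for t, drive in enumerate(inputs):
        state = state + dt * (-leak * state + gain * drive)
        memory[t] = state
    return memory

def spatiotemporal_similarity(trace_a, trace_b):
    """Normalized correlation between two (time, space) memory traces.

    One plausible reading of a 'spatiotemporal similarity measure computed
    directly from the unimodal signals'; the paper's measure may differ.
    """
    a = trace_a.ravel() - trace_a.mean()
    b = trace_b.ravel() - trace_b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def infer_common_cause(similarity, threshold=0.5):
    """Toy decision rule: report a common cause when similarity is high."""
    return similarity >= threshold

if __name__ == "__main__":
    positions = np.linspace(-30.0, 30.0, 121)   # spatial axis (e.g., degrees)
    times = np.arange(0.0, 1.0, 0.01)           # 1 s of simulated time

    def stimulus(loc, onset, dur=0.05, width=5.0):
        """Brief flash/beep: a Gaussian bump active for `dur` seconds."""
        active = ((times >= onset) & (times < onset + dur)).astype(float)
        return active[:, None] * gaussian_bump(loc, positions, width)

    # Visual flash at 0 deg; auditory beep (broader, less reliable) presented
    # slightly later and at a variable spatial disparity.
    visual = stimulus(loc=0.0, onset=0.30)
    for disparity in (0.0, 10.0, 25.0):
        auditory = stimulus(loc=disparity, onset=0.35, width=10.0)
        sim = spatiotemporal_similarity(leaky_integrator(visual),
                                        leaky_integrator(auditory))
        print(f"spatial disparity {disparity:5.1f} deg -> similarity {sim:.2f}, "
              f"common cause: {infer_common_cause(sim)}")
```

Running the sketch shows the similarity, and with it the common-cause report, dropping as the spatial disparity between the visual and auditory bumps grows, which matches the general intuition that larger spatial or temporal disparities make a single underlying source less likely.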