
The FACTS model of speech motor control: Fusing state estimation and task-based control

We present a new computational model of speech motor control: the Feedback-Aware Control of Tasks in Speech (FACTS) model. FACTS employs a hierarchical state feedback control architecture to control a simulated vocal tract and produce intelligible speech. The model includes higher-level control of speech tasks and lower-level control of speech articulators. The task controller is modeled as a dynamical system governing the creation of desired constrictions in the vocal tract, following the Task Dynamics framework. Both the task and articulatory controllers rely on an internal estimate of the current state of the vocal tract to generate motor commands. This estimate is derived, based on an efference copy of the applied controls, from a forward model that predicts both the next vocal tract state and the expected auditory and somatosensory feedback. A comparison between predicted and actual feedback is then used to update the internal state prediction. FACTS qualitatively replicates many characteristics of the human speech system: the model is robust to noise in both the sensory and motor pathways, is relatively unaffected by a loss of auditory feedback but is more significantly impacted by a loss of somatosensory feedback, and responds appropriately to externally imposed alterations of auditory and somatosensory feedback. The model also replicates previously hypothesized trade-offs between reliance on auditory and somatosensory feedback and shows, for the first time, how this relationship may be mediated by acuity in each sensory domain. These results have important implications for our understanding of the speech motor control system in humans.

Bibliographic Details
Main Authors: Parrell, Benjamin, Ramanarayanan, Vikram, Nagarajan, Srikantan, Houde, John
Format: Online Article Text
Language: English
Published: Public Library of Science 2019
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6743785/
https://www.ncbi.nlm.nih.gov/pubmed/31479444
http://dx.doi.org/10.1371/journal.pcbi.1007321
author Parrell, Benjamin
Ramanarayanan, Vikram
Nagarajan, Srikantan
Houde, John
author_sort Parrell, Benjamin
collection PubMed
description We present a new computational model of speech motor control: the Feedback-Aware Control of Tasks in Speech (FACTS) model. FACTS employs a hierarchical state feedback control architecture to control a simulated vocal tract and produce intelligible speech. The model includes higher-level control of speech tasks and lower-level control of speech articulators. The task controller is modeled as a dynamical system governing the creation of desired constrictions in the vocal tract, following the Task Dynamics framework. Both the task and articulatory controllers rely on an internal estimate of the current state of the vocal tract to generate motor commands. This estimate is derived, based on an efference copy of the applied controls, from a forward model that predicts both the next vocal tract state and the expected auditory and somatosensory feedback. A comparison between predicted and actual feedback is then used to update the internal state prediction. FACTS qualitatively replicates many characteristics of the human speech system: the model is robust to noise in both the sensory and motor pathways, is relatively unaffected by a loss of auditory feedback but is more significantly impacted by a loss of somatosensory feedback, and responds appropriately to externally imposed alterations of auditory and somatosensory feedback. The model also replicates previously hypothesized trade-offs between reliance on auditory and somatosensory feedback and shows, for the first time, how this relationship may be mediated by acuity in each sensory domain. These results have important implications for our understanding of the speech motor control system in humans.
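The description above outlines the core FACTS architecture: a task-level controller and an articulatory controller that both act on an internal state estimate, which is predicted from an efference copy of the motor command and corrected by comparing predicted with actual auditory and somatosensory feedback. The sketch below illustrates that predict-compare-correct loop in schematic form only; it is not the authors' implementation, and the linear dynamics (A, B), observation maps (H_AUD, H_SOM), and fixed correction gains (K_AUD, K_SOM) are simplified assumptions chosen purely for illustration.

```python
# Illustrative sketch of a FACTS-style hierarchical state feedback loop.
# NOT the published implementation: the linear dynamics (A, B), observation
# maps (H_AUD, H_SOM), and fixed correction gains (K_AUD, K_SOM) are
# hypothetical stand-ins used only to show the predict/compare/correct flow.
import numpy as np

rng = np.random.default_rng(0)

n_state = 4  # simplified articulatory state (e.g., positions and velocities)
A = np.eye(n_state) + 0.01 * rng.standard_normal((n_state, n_state))  # plant dynamics (assumed)
B = 0.1 * np.eye(n_state)                                             # control effectiveness (assumed)
H_AUD = rng.standard_normal((3, n_state))   # state -> auditory features (assumed)
H_SOM = np.eye(n_state)                     # state -> somatosensory feedback (assumed)
K_AUD = 0.05 * H_AUD.T                      # auditory correction gain (assumed, fixed)
K_SOM = 0.2 * np.eye(n_state)               # somatosensory correction gain (assumed, fixed)

def task_controller(x_est, target):
    """Task level: drive the estimated constriction state toward the target
    (a point-attractor stand-in for the Task Dynamics layer)."""
    return 0.5 * (target - x_est)

def articulatory_controller(task_command):
    """Articulator level: map the task-level command to motor commands
    (identity map here, for simplicity)."""
    return task_command

def forward_model(x_est, u):
    """Predict the next state and the expected sensory consequences
    from the efference copy of the motor command."""
    x_pred = A @ x_est + B @ u
    return x_pred, H_AUD @ x_pred, H_SOM @ x_pred

def plant(x_true, u, motor_noise=0.01):
    """'True' vocal tract: the same dynamics plus motor noise (assumed)."""
    return A @ x_true + B @ u + motor_noise * rng.standard_normal(n_state)

target = np.array([1.0, -0.5, 0.2, 0.0])
x_true = np.zeros(n_state)
x_est = np.zeros(n_state)

for step in range(50):
    u = articulatory_controller(task_controller(x_est, target))   # control acts on the estimate
    x_pred, y_aud_pred, y_som_pred = forward_model(x_est, u)      # efference-copy prediction
    x_true = plant(x_true, u)                                     # actual vocal tract update
    y_aud = H_AUD @ x_true + 0.02 * rng.standard_normal(3)        # noisy auditory feedback
    y_som = H_SOM @ x_true + 0.02 * rng.standard_normal(n_state)  # noisy somatosensory feedback
    # Correct the prediction with the auditory and somatosensory prediction errors.
    x_est = x_pred + K_AUD @ (y_aud - y_aud_pred) + K_SOM @ (y_som - y_som_pred)

print("estimated state:", np.round(x_est, 2))
print("true state     :", np.round(x_true, 2))
```

Setting K_AUD to zero in this sketch leaves the somatosensory correction to keep the estimate on track, loosely mirroring the model's reported robustness to loss of auditory feedback; this is an illustrative parallel under the assumptions above, not a reproduction of the paper's simulations.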
format Online
Article
Text
id pubmed-6743785
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-6743785 2019-09-20 The FACTS model of speech motor control: Fusing state estimation and task-based control. Parrell, Benjamin; Ramanarayanan, Vikram; Nagarajan, Srikantan; Houde, John. PLoS Comput Biol, Research Article. Public Library of Science, 2019-09-03. © 2019 Parrell et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
title The FACTS model of speech motor control: Fusing state estimation and task-based control
title_sort facts model of speech motor control: fusing state estimation and task-based control
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6743785/
https://www.ncbi.nlm.nih.gov/pubmed/31479444
http://dx.doi.org/10.1371/journal.pcbi.1007321