
Real-Time Decision Fusion for Multimodal Neural Prosthetic Devices

BACKGROUND: The field of neural prosthetics aims to develop prosthetic limbs with a brain-computer interface (BCI) through which neural activity is decoded into movements. A natural extension of current research is the incorporation of neural activity from multiple modalities to more accurately estimate the user's intent. …

Full description

Bibliographic Details
Main Authors: White, James Robert, Levy, Todd, Bishop, William, Beaty, James D.
Format: Text
Language: English
Published: Public Library of Science 2010
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2830464/
https://www.ncbi.nlm.nih.gov/pubmed/20209151
http://dx.doi.org/10.1371/journal.pone.0009493
_version_ 1782178164510818304
author White, James Robert
Levy, Todd
Bishop, William
Beaty, James D.
author_facet White, James Robert
Levy, Todd
Bishop, William
Beaty, James D.
author_sort White, James Robert
collection PubMed
description BACKGROUND: The field of neural prosthetics aims to develop prosthetic limbs with a brain-computer interface (BCI) through which neural activity is decoded into movements. A natural extension of current research is the incorporation of neural activity from multiple modalities to more accurately estimate the user's intent. The challenge remains how to appropriately combine this information in real-time for a neural prosthetic device. METHODOLOGY/PRINCIPAL FINDINGS: Here we propose a framework based on decision fusion, i.e., fusing predictions from several single-modality decoders to produce a more accurate device state estimate. We examine two algorithms for continuous variable decision fusion: the Kalman filter and artificial neural networks (ANNs). Using simulated cortical neural spike signals, we implemented several successful individual neural decoding algorithms, and tested the capabilities of each fusion method in the context of decoding 2-dimensional endpoint trajectories of a neural prosthetic arm. Extensively testing these methods on random trajectories, we find that on average both the Kalman filter and ANNs successfully fuse the individual decoder estimates to produce more accurate predictions. CONCLUSIONS: Our results reveal that a fusion-based approach has the potential to improve prediction accuracy over individual decoders of varying quality, and we hope that this work will encourage multimodal neural prosthetics experiments in the future.
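
As a rough illustration of the decision-fusion idea summarized above, the sketch below fuses the (x, y) outputs of two hypothetical single-modality decoders with a standard Kalman filter, treating each decoder's estimate as a noisy observation of the same endpoint state. All matrices, noise levels, and names here are assumptions chosen for illustration (Python/NumPy); they are not the configuration used in the paper.

    import numpy as np

    def kalman_fuse_step(x, P, decoder_estimates, A, Q, H, R):
        """One predict/update step fusing stacked per-decoder (x, y) estimates."""
        x_pred = A @ x                              # predict state
        P_pred = A @ P @ A.T + Q                    # predict covariance
        z = np.concatenate(decoder_estimates)       # stack all decoder outputs
        S = H @ P_pred @ H.T + R                    # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)       # fused state estimate
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Illustrative setup: 2-D endpoint position, two decoders, placeholder noise.
    dim, n_dec = 2, 2
    A, Q = np.eye(dim), 0.01 * np.eye(dim)          # random-walk dynamics (assumed)
    H = np.vstack([np.eye(dim)] * n_dec)            # each decoder observes (x, y)
    R = np.diag([0.05, 0.05, 0.20, 0.20])           # second decoder assumed noisier
    x, P = np.zeros(dim), np.eye(dim)

    # Fuse one time step of two decoder outputs.
    x, P = kalman_fuse_step(x, P, [np.array([0.12, -0.30]),
                                   np.array([0.18, -0.25])], A, Q, H, R)
    print(x)  # fused endpoint estimate for this time step

Because the per-decoder noise covariances in R differ, the filter naturally weights the more reliable decoder more heavily, which is the intuition behind fusing decoders of varying quality.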
format Text
id pubmed-2830464
institution National Center for Biotechnology Information
language English
publishDate 2010
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-2830464 2010-03-05 Real-Time Decision Fusion for Multimodal Neural Prosthetic Devices White, James Robert Levy, Todd Bishop, William Beaty, James D. PLoS One Research Article BACKGROUND: The field of neural prosthetics aims to develop prosthetic limbs with a brain-computer interface (BCI) through which neural activity is decoded into movements. A natural extension of current research is the incorporation of neural activity from multiple modalities to more accurately estimate the user's intent. The challenge remains how to appropriately combine this information in real-time for a neural prosthetic device. METHODOLOGY/PRINCIPAL FINDINGS: Here we propose a framework based on decision fusion, i.e., fusing predictions from several single-modality decoders to produce a more accurate device state estimate. We examine two algorithms for continuous variable decision fusion: the Kalman filter and artificial neural networks (ANNs). Using simulated cortical neural spike signals, we implemented several successful individual neural decoding algorithms, and tested the capabilities of each fusion method in the context of decoding 2-dimensional endpoint trajectories of a neural prosthetic arm. Extensively testing these methods on random trajectories, we find that on average both the Kalman filter and ANNs successfully fuse the individual decoder estimates to produce more accurate predictions. CONCLUSIONS: Our results reveal that a fusion-based approach has the potential to improve prediction accuracy over individual decoders of varying quality, and we hope that this work will encourage multimodal neural prosthetics experiments in the future. Public Library of Science 2010-03-02 /pmc/articles/PMC2830464/ /pubmed/20209151 http://dx.doi.org/10.1371/journal.pone.0009493 Text en White et al. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
spellingShingle Research Article
White, James Robert
Levy, Todd
Bishop, William
Beaty, James D.
Real-Time Decision Fusion for Multimodal Neural Prosthetic Devices
title Real-Time Decision Fusion for Multimodal Neural Prosthetic Devices
title_full Real-Time Decision Fusion for Multimodal Neural Prosthetic Devices
title_fullStr Real-Time Decision Fusion for Multimodal Neural Prosthetic Devices
title_full_unstemmed Real-Time Decision Fusion for Multimodal Neural Prosthetic Devices
title_short Real-Time Decision Fusion for Multimodal Neural Prosthetic Devices
title_sort real-time decision fusion for multimodal neural prosthetic devices
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2830464/
https://www.ncbi.nlm.nih.gov/pubmed/20209151
http://dx.doi.org/10.1371/journal.pone.0009493
work_keys_str_mv AT whitejamesrobert realtimedecisionfusionformultimodalneuralprostheticdevices
AT levytodd realtimedecisionfusionformultimodalneuralprostheticdevices
AT bishopwilliam realtimedecisionfusionformultimodalneuralprostheticdevices
AT beatyjamesd realtimedecisionfusionformultimodalneuralprostheticdevices