
A deep convolutional visual encoding model of neuronal responses in the LGN


Bibliographic Details
Main Authors: Mounier, Eslam; Abdullah, Bassem; Mahdi, Hani; Eldawlatly, Seif
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg, 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8206408/
https://www.ncbi.nlm.nih.gov/pubmed/34129111
http://dx.doi.org/10.1186/s40708-021-00132-6
_version_ 1783708626082332672
author Mounier, Eslam
Abdullah, Bassem
Mahdi, Hani
Eldawlatly, Seif
author_facet Mounier, Eslam
Abdullah, Bassem
Mahdi, Hani
Eldawlatly, Seif
author_sort Mounier, Eslam
collection PubMed
description The Lateral Geniculate Nucleus (LGN) represents one of the major processing sites along the visual pathway. Despite its crucial role in processing visual information and its utility as a target for recently developed visual prostheses, it is much less studied than the retina and the visual cortex. In this paper, we introduce a deep learning encoder to predict LGN neuronal firing in response to different visual stimulation patterns. The encoder comprises a deep Convolutional Neural Network (CNN) that incorporates a spatiotemporal representation of the visual stimulus, in addition to LGN neuronal firing history, to predict the response of LGN neurons. Extracellular activity was recorded in vivo using multi-electrode arrays from single units in the LGN of 12 anesthetized rats, with a total neuronal population of 150 units. Neural activity was recorded in response to single-pixel, checkerboard, and geometrical-shape visual stimulation patterns. Extracted firing rates and the corresponding stimulation patterns were used to train the model. The performance of the model was assessed using different testing data sets and different firing-rate windows. An overall mean correlation coefficient between the actual and the predicted firing rates of 0.57 and 0.7 was achieved for the 10 ms and the 50 ms firing-rate windows, respectively. The results demonstrate that the model is robust to variability in the spatiotemporal properties of the recorded neurons, outperforming other examined models, including the state-of-the-art Generalized Linear Model (GLM). The results indicate the potential of deep convolutional neural networks as viable models of LGN firing. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s40708-021-00132-6.
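The evaluation described in the abstract (binning spikes into 10 ms or 50 ms firing-rate windows and correlating actual with predicted rates) can be sketched as follows. This is a minimal illustration of the metric only, not the paper's pipeline; the function names and the synthetic spike data are hypothetical.

```python
import numpy as np

def bin_firing_rate(spike_times_ms, duration_ms, window_ms):
    """Count spikes per firing-rate window (e.g., 10 ms or 50 ms bins)."""
    n_bins = int(duration_ms // window_ms)
    edges = np.arange(n_bins + 1) * window_ms
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    return counts.astype(float)

def pearson_r(actual, predicted):
    """Correlation coefficient between actual and predicted firing rates."""
    return float(np.corrcoef(actual, predicted)[0, 1])

# Hypothetical example: a 1-second recording evaluated at a 10 ms window.
rng = np.random.default_rng(0)
spikes = np.sort(rng.uniform(0, 1000, size=80))   # 80 spikes over 1000 ms
actual_10 = bin_firing_rate(spikes, 1000, 10)     # 100 bins of 10 ms
predicted_10 = actual_10 + rng.normal(0, 1, actual_10.size)  # stand-in "model" output
r = pearson_r(actual_10, predicted_10)
```

Note that the correlation is invariant to scaling, so correlating spike counts per window is equivalent to correlating rates in spikes/s; the choice of window trades temporal resolution against estimation noise, which is consistent with the higher coefficient the abstract reports for the coarser 50 ms window.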
format Online
Article
Text
id pubmed-8206408
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Springer Berlin Heidelberg
record_format MEDLINE/PubMed
spelling pubmed-8206408 2021-07-01 A deep convolutional visual encoding model of neuronal responses in the LGN Mounier, Eslam; Abdullah, Bassem; Mahdi, Hani; Eldawlatly, Seif. Brain Inform. Research.
Springer Berlin Heidelberg 2021-06-15 /pmc/articles/PMC8206408/ /pubmed/34129111 http://dx.doi.org/10.1186/s40708-021-00132-6 Text en © The Author(s) 2021. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Research
Mounier, Eslam
Abdullah, Bassem
Mahdi, Hani
Eldawlatly, Seif
A deep convolutional visual encoding model of neuronal responses in the LGN
title A deep convolutional visual encoding model of neuronal responses in the LGN
title_full A deep convolutional visual encoding model of neuronal responses in the LGN
title_fullStr A deep convolutional visual encoding model of neuronal responses in the LGN
title_full_unstemmed A deep convolutional visual encoding model of neuronal responses in the LGN
title_short A deep convolutional visual encoding model of neuronal responses in the LGN
title_sort deep convolutional visual encoding model of neuronal responses in the lgn
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8206408/
https://www.ncbi.nlm.nih.gov/pubmed/34129111
http://dx.doi.org/10.1186/s40708-021-00132-6
work_keys_str_mv AT mouniereslam adeepconvolutionalvisualencodingmodelofneuronalresponsesinthelgn
AT abdullahbassem adeepconvolutionalvisualencodingmodelofneuronalresponsesinthelgn
AT mahdihani adeepconvolutionalvisualencodingmodelofneuronalresponsesinthelgn
AT eldawlatlyseif adeepconvolutionalvisualencodingmodelofneuronalresponsesinthelgn
AT mouniereslam deepconvolutionalvisualencodingmodelofneuronalresponsesinthelgn
AT abdullahbassem deepconvolutionalvisualencodingmodelofneuronalresponsesinthelgn
AT mahdihani deepconvolutionalvisualencodingmodelofneuronalresponsesinthelgn
AT eldawlatlyseif deepconvolutionalvisualencodingmodelofneuronalresponsesinthelgn