
Beyond ℓ₁ sparse coding in V1

Growing evidence indicates that only a sparse subset from a pool of sensory neurons is active for the encoding of visual stimuli at any instant in time. Traditionally, to replicate such biological sparsity, generative models have used the ℓ₁ norm as a penalty due to its convexity, which makes it amenable to fast and simple algorithmic solvers. In this work, we use biological vision as a test-bed and show that the soft-thresholding operation associated with the ℓ₁ norm performs far worse than other thresholding functions suited to approximating ℓₚ with 0 ≤ p < 1 (including recently proposed continuous exact relaxations). We show that ℓ₁ sparsity requires a pool with more neurons, i.e. a higher degree of overcompleteness, to maintain the same reconstruction error as the other methods considered. More specifically, at the same sparsity level, the thresholding algorithm using the ℓ₁ penalty requires a dictionary with ten times more units than the proposed approach, which uses a non-convex continuous relaxation of the ℓ₀ pseudo-norm, to reconstruct the external stimulus equally well. At a fixed sparsity level, both ℓ₀- and ℓ₁-based regularization develop units with receptive field (RF) shapes similar to biological neurons in V1 (and a subset of neurons in V2), but ℓ₀-based regularization reconstructs the stimulus approximately five times better. Our results, in conjunction with recent metabolic findings, indicate that for V1 to operate efficiently it should follow a coding regime whose regularization is closer to the ℓ₀ pseudo-norm than to the ℓ₁ norm, and they suggest a similar mode of operation for the sensory cortex in general.
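For intuition, the contrast drawn in the abstract comes down to the thresholding (proximal) step inside an iterative sparse-coding solver: the ℓ₁ penalty yields soft thresholding, which shrinks every surviving coefficient toward zero, while the ℓ₀ pseudo-norm yields hard thresholding, which keeps surviving coefficients at full amplitude. The sketch below is a minimal illustration in plain NumPy, with a fixed random dictionary and made-up sizes and penalty weights; it is not the authors' implementation, which uses continuous exact relaxations of ℓ₀ and learns the dictionary from natural images.

    import numpy as np

    def soft_threshold(x, t):
        # Proximal operator of t*||.||_1: shrink every coefficient toward zero by t.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def hard_threshold(x, t):
        # Proximal operator of t*||.||_0: zero out small coefficients, keep the rest intact.
        out = x.copy()
        out[np.abs(x) < np.sqrt(2.0 * t)] = 0.0
        return out

    def iterative_thresholding(D, y, lam, prox, n_iter=500):
        # Minimize 0.5*||y - D a||^2 + lam*penalty(a) by forward-backward splitting:
        # a gradient step on the data term, then the penalty's proximal step.
        step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1 / Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            a = prox(a - step * (D.T @ (D @ a - y)), step * lam)
        return a

    # Toy comparison on a synthetic sparse signal (all sizes hypothetical).
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)  # unit-norm dictionary atoms
    a_true = np.zeros(256)
    a_true[rng.choice(256, size=8, replace=False)] = rng.standard_normal(8)
    y = D @ a_true
    for name, prox, lam in [("l1/soft", soft_threshold, 1e-2), ("l0/hard", hard_threshold, 1e-3)]:
        a = iterative_thresholding(D, y, lam, prox)
        err = np.linalg.norm(y - D @ a) / np.linalg.norm(y)
        print(f"{name}: {np.count_nonzero(a)} active units, relative error {err:.3f}")

Because hard thresholding does not bias the surviving coefficients toward zero, a given number of active units carries more signal, which gives some intuition for the paper's finding that, at matched sparsity, ℓ₀-style regularization reconstructs the stimulus better or needs a much smaller dictionary.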


Bibliographic Details
Main Authors: Rentzeperis, Ilias; Calatroni, Luca; Perrinet, Laurent U.; Prandi, Dario
Format: Online Article Text
Language: English
Published: Public Library of Science, 2023
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10516432/
https://www.ncbi.nlm.nih.gov/pubmed/37699052
http://dx.doi.org/10.1371/journal.pcbi.1011459
Journal: PLoS Comput Biol
Published online: 2023-09-12
© 2023 Rentzeperis et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.