
Efficient neural codes naturally emerge through gradient descent learning


Bibliographic Details

Main Authors: Benjamin, Ari S., Zhang, Ling-Qi, Qiu, Cheng, Stocker, Alan A., Kording, Konrad P.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9800366/
https://www.ncbi.nlm.nih.gov/pubmed/36581618
http://dx.doi.org/10.1038/s41467-022-35659-7
Description
Summary: Human sensory systems are more sensitive to common features in the environment than uncommon features. For example, small deviations from the more frequently encountered horizontal orientations can be more easily detected than small deviations from the less frequent diagonal ones. Here we find that artificial neural networks trained to recognize objects also have patterns of sensitivity that match the statistics of features in images. To interpret these findings, we show mathematically that learning with gradient descent in neural networks preferentially creates representations that are more sensitive to common features, a hallmark of efficient coding. This effect occurs in systems with otherwise unconstrained coding resources, and additionally when learning towards both supervised and unsupervised objectives. This result demonstrates that efficient codes can naturally emerge from gradient-like learning.
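The central claim of the summary, that gradient descent preferentially builds sensitivity to common features before rare ones, can be illustrated with a toy sketch. The following is not the authors' experimental setup, just a minimal demonstration under simplified assumptions: a two-layer linear autoencoder trained from small random weights on data whose "common" input direction has much higher variance than its "rare" direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D input statistics: the "common" direction (axis 0) has much
# higher variance than the "uncommon" direction (axis 1), loosely
# mimicking the anisotropic orientation statistics of natural images.
n = 2000
X = rng.normal(size=(n, 2)) * np.array([3.0, 0.5])

# Two-layer linear autoencoder with small random init, trained by
# plain full-batch gradient descent on mean squared reconstruction error.
W1 = rng.normal(scale=0.01, size=(2, 2))  # encoder
W2 = rng.normal(scale=0.01, size=(2, 2))  # decoder
lr = 0.01

for _ in range(200):
    H = X @ W1                    # hidden representation
    err = H @ W2 - X              # reconstruction error
    gW2 = H.T @ err / n           # gradient of 0.5 * mean ||err||^2 wrt W2
    gW1 = X.T @ (err @ W2.T) / n  # gradient wrt W1
    W1 -= lr * gW1
    W2 -= lr * gW2

# Sensitivity of the hidden representation to a unit perturbation along
# each input direction (for a linear encoder the Jacobian is W1.T).
sens_common = np.linalg.norm(W1.T @ np.array([1.0, 0.0]))
sens_rare = np.linalg.norm(W1.T @ np.array([0.0, 1.0]))
print(sens_common > sens_rare)  # the common direction is learned first
```

With small initialization, each input direction's representation grows at a rate proportional to its variance, so after a fixed training budget the network is far more sensitive to the high-variance (common) direction, the efficient-coding signature the paper describes.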