
Energy Guided Diffusion for Generating Neurally Exciting Images

Bibliographic Details
Main Authors: Pierzchlewicz, Paweł A., Willeke, Konstantin F., Nix, Arne F., Elumalai, Pavithra, Restivo, Kelli, Shinn, Tori, Nealley, Cate, Rodriguez, Gabrielle, Patel, Saumil, Franke, Katrin, Tolias, Andreas S., Sinz, Fabian H.
Format: Online Article Text
Language: English
Published: Cold Spring Harbor Laboratory, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10245650/
https://www.ncbi.nlm.nih.gov/pubmed/37292670
http://dx.doi.org/10.1101/2023.05.18.541176
Collection: PubMed
Description: In recent years, most exciting inputs (MEIs) synthesized from encoding models of neuronal activity have become an established method to study the tuning properties of biological and artificial visual systems. However, as we move up the visual hierarchy, the complexity of neuronal computations increases. Consequently, it becomes more challenging to model neuronal activity, requiring more complex models. In this study, we introduce a new attention readout for a convolutional data-driven core for neurons in macaque V4 that outperforms the state-of-the-art task-driven ResNet model in predicting neuronal responses. However, as the predictive network becomes deeper and more complex, synthesizing MEIs via straightforward gradient ascent (GA) can struggle to produce qualitatively good results and can overfit to idiosyncrasies of a more complex model, potentially decreasing the MEI's model-to-brain transferability. To solve this problem, we propose a diffusion-based method for generating MEIs via Energy Guidance (EGG). We show that for models of macaque V4, EGG generates single-neuron MEIs that generalize better across architectures than the state-of-the-art GA, while preserving the within-architecture activation and requiring 4.7x less compute time. Furthermore, EGG diffusion can be used to generate other neurally exciting images, such as most exciting natural images that are on par with a selection of highly activating natural images, or image reconstructions that generalize better across architectures. Finally, EGG is simple to implement, requires no retraining of the diffusion model, and can easily be generalized to provide other characterizations of the visual system, such as invariances. Thus, EGG provides a general and flexible framework to study coding properties of the visual system in the context of natural images.
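The guidance scheme the abstract describes can be illustrated with a toy sketch: at each reverse-diffusion step, the current denoised estimate is nudged down the gradient of an energy function, so sampling is steered toward low-energy (highly activating) images without retraining the diffusion model. Everything below is illustrative, not the authors' implementation: the quadratic stand-in energy, the shrinkage "denoiser", and the step schedule are assumptions; in EGG the energy would come from the neural encoding model and the denoiser from a pretrained diffusion model.

```python
import numpy as np

def energy_grad(x, target):
    """Gradient of the stand-in energy E(x) = 0.5 * ||x - target||^2.
    In EGG proper, this role is played by the (negative) activation of
    a neuron's encoding model, differentiated by autograd."""
    return x - target

def egg_sample(target, steps=50, guidance_scale=0.3, seed=0):
    """Toy energy-guided reverse diffusion over a flat vector."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(target.shape)  # start from pure noise
    for t in range(steps, 0, -1):
        noise_level = t / steps
        # Placeholder denoiser: shrink toward zero. A real diffusion
        # model would predict the clean image x0 from the noisy x.
        x0_hat = x * (1.0 - noise_level)
        # Energy guidance: descend the energy on the denoised estimate.
        x0_hat = x0_hat - guidance_scale * energy_grad(x0_hat, target)
        # Stochastic step back toward a lower noise level.
        next_level = (t - 1) / steps
        x = x0_hat + next_level * rng.standard_normal(target.shape)
    return x

target = np.ones(4)          # low-energy point, standing in for an MEI
sample = egg_sample(target)  # guided sample drifts toward the target
```

Because the guidance term only touches the denoised estimate, the same loop runs with any differentiable energy, which is what makes the approach easy to repurpose for reconstructions or invariances.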
Record ID: pubmed-10245650
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Published Online: 2023-05-20
License: Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/. Reusers may distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, provided attribution is given to the creator.