TorchLens: A Python package for extracting and visualizing hidden activations of PyTorch models
Deep neural network models (DNNs) are essential to modern AI and provide powerful models of information processing in biological neural networks. Researchers in both neuroscience and engineering are pursuing a better understanding of the internal representations and operations that undergird the successes and failures of DNNs.
Main authors: | Taylor, JohnMark | Kriegeskorte, Nikolaus |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Cold Spring Harbor Laboratory, 2023 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10055035/ https://www.ncbi.nlm.nih.gov/pubmed/36993311 http://dx.doi.org/10.1101/2023.03.16.532916 |
_version_ | 1785015809163132928 |
---|---|
author | Taylor, JohnMark Kriegeskorte, Nikolaus |
author_facet | Taylor, JohnMark Kriegeskorte, Nikolaus |
author_sort | Taylor, JohnMark |
collection | PubMed |
description | Deep neural network models (DNNs) are essential to modern AI and provide powerful models of information processing in biological neural networks. Researchers in both neuroscience and engineering are pursuing a better understanding of the internal representations and operations that undergird the successes and failures of DNNs. Neuroscientists additionally evaluate DNNs as models of brain computation by comparing their internal representations to those found in brains. It is therefore essential to have a method to easily and exhaustively extract and characterize the results of the internal operations of any DNN. Many models are implemented in PyTorch, the leading framework for building DNN models. Here we introduce TorchLens, a new open-source Python package for extracting and characterizing hidden-layer activations in PyTorch models. Uniquely among existing approaches to this problem, TorchLens has the following features: (1) it exhaustively extracts the results of all intermediate operations, not just those associated with PyTorch module objects, yielding a full record of every step in the model’s computational graph, (2) it provides an intuitive visualization of the model’s complete computational graph along with metadata about each computational step in a model’s forward pass for further analysis, (3) it contains a built-in validation procedure to algorithmically verify the accuracy of all saved hidden-layer activations, and (4) the approach it uses can be automatically applied to any PyTorch model with no modifications, including models with conditional (if-then) logic in their forward pass, recurrent models, branching models where layer outputs are fed into multiple subsequent layers in parallel, and models with internally generated tensors (e.g., injections of noise). 
Furthermore, using TorchLens requires minimal additional code, making it easy to incorporate into existing pipelines for model development and analysis, and useful as a pedagogical aid when teaching deep learning concepts. We hope this contribution will help researchers in AI and neuroscience understand the internal representations of DNNs. |
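The limitation named in feature (1) can be illustrated in plain PyTorch. The sketch below (using a hypothetical `TinyNet` model, not code from the paper) shows the conventional forward-hook approach to activation extraction: hooks capture the outputs of `nn.Module` submodules, but a functional call such as `torch.relu` in the forward pass leaves no trace, which is the gap TorchLens's exhaustive extraction closes.

```python
import torch
import torch.nn as nn

# A tiny model whose forward pass includes a non-module operation
# (torch.relu called as a function rather than via nn.ReLU).
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 3)
        self.fc2 = nn.Linear(3, 2)

    def forward(self, x):
        x = self.fc1(x)
        x = torch.relu(x)  # functional op: invisible to module hooks
        return self.fc2(x)

model = TinyNet().eval()
activations = {}

def make_hook(name):
    # Record each submodule's output under its qualified name.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

handles = [m.register_forward_hook(make_hook(n))
           for n, m in model.named_modules() if n]  # skip the root module

with torch.no_grad():
    model(torch.randn(1, 4))

for h in handles:
    h.remove()

# Only the Linear layers are captured; the relu result is missing.
print(sorted(activations))
```

Module hooks thus record two of the three intermediate results; TorchLens, by contrast, logs every step in the computational graph, including functional calls like this one.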
format | Online Article Text |
id | pubmed-10055035 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Cold Spring Harbor Laboratory |
record_format | MEDLINE/PubMed |
spelling | pubmed-100550352023-03-30 TorchLens: A Python package for extracting and visualizing hidden activations of PyTorch models Taylor, JohnMark Kriegeskorte, Nikolaus bioRxiv Article [abstract as in description above] Cold Spring Harbor Laboratory 2023-03-18 /pmc/articles/PMC10055035/ /pubmed/36993311 http://dx.doi.org/10.1101/2023.03.16.532916 Text en https://creativecommons.org/licenses/by/4.0/ This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use. |
spellingShingle | Article Taylor, JohnMark Kriegeskorte, Nikolaus TorchLens: A Python package for extracting and visualizing hidden activations of PyTorch models |
title | TorchLens: A Python package for extracting and visualizing hidden activations of PyTorch models |
title_full | TorchLens: A Python package for extracting and visualizing hidden activations of PyTorch models |
title_fullStr | TorchLens: A Python package for extracting and visualizing hidden activations of PyTorch models |
title_full_unstemmed | TorchLens: A Python package for extracting and visualizing hidden activations of PyTorch models |
title_short | TorchLens: A Python package for extracting and visualizing hidden activations of PyTorch models |
title_sort | torchlens: a python package for extracting and visualizing hidden activations of pytorch models |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10055035/ https://www.ncbi.nlm.nih.gov/pubmed/36993311 http://dx.doi.org/10.1101/2023.03.16.532916 |
work_keys_str_mv | AT taylorjohnmark torchlensapythonpackageforextractingandvisualizinghiddenactivationsofpytorchmodels AT kriegeskortenikolaus torchlensapythonpackageforextractingandvisualizinghiddenactivationsofpytorchmodels |