
Comparing feedforward neural networks using independent component analysis on hidden units

Neural networks are widely used for classification and regression tasks, but they do not always perform well, nor explicitly inform us of the rationale for their predictions. In this study we propose a novel method of comparing a pair of different feedforward neural networks, which draws on independent components obtained by independent component analysis (ICA) on the hidden layers of these networks. It can compare different feedforward neural networks even when they have different structures, as well as feedforward neural networks that learned partially different datasets, yielding insights into their functionality or performance. We evaluate the proposed method by conducting three experiments with feedforward neural networks that have one hidden layer, and verify whether a pair of feedforward neural networks can be compared by the proposed method when the numbers of hidden units in the layer are different, when the datasets are partially different, and when activation functions are different. The results show that similar independent components are extracted from two feedforward neural networks, even when the three circumstances above are different. Our experiments also reveal that mere comparison of weights or activations does not lead to identifying similar relationships. Through the extraction of independent components, the proposed method can assess whether the internal processing of one neural network resembles that of another. This approach has the potential to help understand the performance of neural networks.


Bibliographic Details
Main Authors: Satoh, Seiya, Yamagishi, Kenta, Takahashi, Tatsuji
Format: Online Article Text
Language: English
Published: Public Library of Science 2023
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10449181/
https://www.ncbi.nlm.nih.gov/pubmed/37616212
http://dx.doi.org/10.1371/journal.pone.0290435
_version_ 1785094894237253632
author Satoh, Seiya
Yamagishi, Kenta
Takahashi, Tatsuji
author_facet Satoh, Seiya
Yamagishi, Kenta
Takahashi, Tatsuji
author_sort Satoh, Seiya
collection PubMed
description Neural networks are widely used for classification and regression tasks, but they do not always perform well, nor explicitly inform us of the rationale for their predictions. In this study we propose a novel method of comparing a pair of different feedforward neural networks, which draws on independent components obtained by independent component analysis (ICA) on the hidden layers of these networks. It can compare different feedforward neural networks even when they have different structures, as well as feedforward neural networks that learned partially different datasets, yielding insights into their functionality or performance. We evaluate the proposed method by conducting three experiments with feedforward neural networks that have one hidden layer, and verify whether a pair of feedforward neural networks can be compared by the proposed method when the numbers of hidden units in the layer are different, when the datasets are partially different, and when activation functions are different. The results show that similar independent components are extracted from two feedforward neural networks, even when the three circumstances above are different. Our experiments also reveal that mere comparison of weights or activations does not lead to identifying similar relationships. Through the extraction of independent components, the proposed method can assess whether the internal processing of one neural network resembles that of another. This approach has the potential to help understand the performance of neural networks.
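note The description above states the core idea only in prose. As a rough, hedged illustration of that idea (not the authors' exact procedure), the Python sketch below applies FastICA to the hidden-unit activations of two one-hidden-layer networks and matches the recovered independent components by absolute correlation; the helper names (hidden_activations, match_components), the random stand-in weights, and the choice of tanh versus sigmoid activations are assumptions made for the example.

# Minimal sketch of the comparison idea summarized in the abstract: run ICA on
# the hidden-layer activations of two trained feedforward networks and check
# whether their independent components line up. Illustrative only; names and
# data are placeholders, not the paper's implementation.
import numpy as np
from sklearn.decomposition import FastICA

def hidden_activations(W, b, X, act=np.tanh):
    """Hidden-unit outputs of a one-hidden-layer network for inputs X."""
    return act(X @ W + b)  # shape: (n_samples, n_hidden_units)

def match_components(H1, H2, n_components):
    """Extract independent components from each activation matrix and return,
    for every component of network 1, its best absolute correlation with any
    component of network 2 (values near 1 suggest similar internal features)."""
    S1 = FastICA(n_components=n_components, random_state=0).fit_transform(H1)
    S2 = FastICA(n_components=n_components, random_state=0).fit_transform(H2)
    # Cross-correlations between the two sets of recovered component signals.
    C = np.corrcoef(S1.T, S2.T)[:n_components, n_components:]
    return np.max(np.abs(C), axis=1)

# Example with random data standing in for two trained networks that differ in
# hidden-layer size and activation function, as in the paper's experiments.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
W1, b1 = rng.normal(size=(4, 10)), rng.normal(size=10)  # 10 hidden units, tanh
W2, b2 = rng.normal(size=(4, 6)), rng.normal(size=6)    # 6 hidden units, sigmoid
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
H1 = hidden_activations(W1, b1, X)
H2 = hidden_activations(W2, b2, X, act=sigmoid)
print(match_components(H1, H2, n_components=4))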
format Online
Article
Text
id pubmed-10449181
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-10449181 2023-08-25 Comparing feedforward neural networks using independent component analysis on hidden units Satoh, Seiya Yamagishi, Kenta Takahashi, Tatsuji PLoS One Research Article Neural networks are widely used for classification and regression tasks, but they do not always perform well, nor explicitly inform us of the rationale for their predictions. In this study we propose a novel method of comparing a pair of different feedforward neural networks, which draws on independent components obtained by independent component analysis (ICA) on the hidden layers of these networks. It can compare different feedforward neural networks even when they have different structures, as well as feedforward neural networks that learned partially different datasets, yielding insights into their functionality or performance. We evaluate the proposed method by conducting three experiments with feedforward neural networks that have one hidden layer, and verify whether a pair of feedforward neural networks can be compared by the proposed method when the numbers of hidden units in the layer are different, when the datasets are partially different, and when activation functions are different. The results show that similar independent components are extracted from two feedforward neural networks, even when the three circumstances above are different. Our experiments also reveal that mere comparison of weights or activations does not lead to identifying similar relationships. Through the extraction of independent components, the proposed method can assess whether the internal processing of one neural network resembles that of another. This approach has the potential to help understand the performance of neural networks. Public Library of Science 2023-08-24 /pmc/articles/PMC10449181/ /pubmed/37616212 http://dx.doi.org/10.1371/journal.pone.0290435 Text en © 2023 Satoh et al https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Satoh, Seiya
Yamagishi, Kenta
Takahashi, Tatsuji
Comparing feedforward neural networks using independent component analysis on hidden units
title Comparing feedforward neural networks using independent component analysis on hidden units
title_full Comparing feedforward neural networks using independent component analysis on hidden units
title_fullStr Comparing feedforward neural networks using independent component analysis on hidden units
title_full_unstemmed Comparing feedforward neural networks using independent component analysis on hidden units
title_short Comparing feedforward neural networks using independent component analysis on hidden units
title_sort comparing feedforward neural networks using independent component analysis on hidden units
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10449181/
https://www.ncbi.nlm.nih.gov/pubmed/37616212
http://dx.doi.org/10.1371/journal.pone.0290435
work_keys_str_mv AT satohseiya comparingfeedforwardneuralnetworksusingindependentcomponentanalysisonhiddenunits
AT yamagishikenta comparingfeedforwardneuralnetworksusingindependentcomponentanalysisonhiddenunits
AT takahashitatsuji comparingfeedforwardneuralnetworksusingindependentcomponentanalysisonhiddenunits