
Number of necessary training examples for Neural Networks with different number of trainable parameters

The aim of this work was to reduce network complexity with a concomitant reduction in the number of necessary training examples. The focus was therefore on how appropriate evaluation metrics depend on the number of adjustable parameters of the deep neural network under consideration. The data set used comprised Hematoxylin and Eosin (H&E) stained cell images provided by various clinics. We used a deep convolutional neural network to establish the relation between a model's complexity, its concomitant set of parameters, and the size of the training sample necessary to achieve a given classification accuracy. The complexity of the deep neural networks was reduced by pruning a certain fraction of the filters in the network. As expected, the unpruned neural network showed the best performance. The network with the highest number of trainable parameters achieved, within the estimated standard error of the optimized cross-entropy loss, the best results for up to 30% pruning. Strongly pruned networks are hardly viable, and classification accuracy declines quickly with a decreasing number of training patterns. However, up to a pruning ratio of 40%, we found comparable performance between pruned and unpruned deep convolutional neural networks (DCNN) and densely connected convolutional networks (DCCN).

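To make the pruning step concrete, the following is a minimal sketch (not taken from the paper) of L1-norm filter pruning, assuming PyTorch and a small stand-in CNN in place of the authors' DCNN/DCCN architectures; all class and function names below are illustrative.

```python
# Minimal sketch: structured (filter-level) L1 pruning of a toy CNN.
# Assumes PyTorch >= 1.4 (torch.nn.utils.prune); TinyCNN is a hypothetical
# stand-in, not the network used in the paper.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


class TinyCNN(nn.Module):
    """Small stand-in convolutional classifier for two classes."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


def prune_filters(model: nn.Module, ratio: float) -> None:
    """Zero out `ratio` of each Conv2d layer's output filters, ranked by L1 norm."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            # dim=0 selects whole output filters; n=1 uses the L1 norm.
            prune.ln_structured(module, name="weight", amount=ratio, n=1, dim=0)
            prune.remove(module, "weight")  # bake the pruning mask into the weights


def count_nonzero_weights(model: nn.Module) -> int:
    """Count weights that remain non-zero after pruning."""
    return int(sum((p != 0).sum().item() for p in model.parameters()))


if __name__ == "__main__":
    for ratio in (0.0, 0.3, 0.4):  # pruning ratios comparable to those studied
        model = TinyCNN()
        if ratio > 0:
            prune_filters(model, ratio)
        print(f"pruning ratio {ratio:.0%}: {count_nonzero_weights(model)} non-zero weights")
```

Note that `ln_structured` only masks filters to zero rather than removing them from the computation graph, so the non-zero weight count above is a proxy for the reduced number of trainable parameters discussed in the abstract.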

Bibliographic Details
Main Authors: Götz, Th.I., Göb, S., Sawant, S., Erick, X.F., Wittenberg, T., Schmidkonz, C., Tomé, A.M., Lang, E.W., Ramming, A.
Format: Online Article Text
Language: English
Published: Elsevier 2022
Subjects: Original Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9577052/
https://www.ncbi.nlm.nih.gov/pubmed/36268092
http://dx.doi.org/10.1016/j.jpi.2022.100114
Collection: PubMed
Record ID: pubmed-9577052
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Journal: J Pathol Inform
Published online: 2022-07-06
© 2022 Published by Elsevier Inc. on behalf of Association for Pathology Informatics. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).