
Enhancing the accuracies by performing pooling decisions adjacent to the output layer


Bibliographic Details
Main Authors: Meir, Yuval; Tzach, Yarden; Gross, Ronit D.; Tevet, Ofek; Vardi, Roni; Kanter, Ido
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10471572/
https://www.ncbi.nlm.nih.gov/pubmed/37652973
http://dx.doi.org/10.1038/s41598-023-40566-y
Description
Summary: Learning classification tasks of [Formula: see text] inputs typically consist of [Formula: see text] max-pooling (MP) operators along the entire feedforward deep architecture. Here we show, using the CIFAR-10 database, that pooling decisions adjacent to the last convolutional layer significantly enhance accuracies. In particular, average accuracies of the advanced-VGG with [Formula: see text] layers (A-VGGm) architectures are 0.936, 0.940, 0.954, 0.955, and 0.955 for m = 6, 8, 14, 13, and 16, respectively. The results indicate that A-VGG8's accuracy is superior to VGG16's, and that the accuracies of A-VGG13 and A-VGG16 are equal and comparable to that of Wide-ResNet16. In addition, replacing the three fully connected (FC) layers with one FC layer, as in A-VGG6 and A-VGG14, or with several linear-activation FC layers, yielded similar accuracies. These significantly enhanced accuracies stem from training the most influential input–output routes, in comparison to the inferior routes selected following multiple MP decisions along the deep architecture. In addition, accuracies are sensitive to the order of the non-commutative MP and average-pooling operators adjacent to the output layer, which varies the number and location of training routes. The results call for the reexamination of previously proposed deep architectures and their accuracies by utilizing the proposed pooling strategy adjacent to the output layer.
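The abstract emphasizes that the MP and average-pooling operators are non-commutative, so their order adjacent to the output layer matters. A minimal sketch (not from the paper; plain NumPy with a hypothetical `pool2x2` helper) illustrating that swapping the two pooling operators on the same input yields different values:

```python
import numpy as np

def pool2x2(x, op):
    """Apply a 2x2 pooling operator (np.max or np.mean) with stride 2."""
    h, w = x.shape
    return np.array([[op(x[i:i + 2, j:j + 2])
                      for j in range(0, w, 2)]
                     for i in range(0, h, 2)])

# A small 4x4 feature map standing in for the last convolutional layer's output.
x = np.arange(16, dtype=float).reshape(4, 4)

mp_then_ap = pool2x2(pool2x2(x, np.max), np.mean)   # MP first, then average pooling
ap_then_mp = pool2x2(pool2x2(x, np.mean), np.max)   # average pooling first, then MP

print(mp_then_ap.item())  # 10.0
print(ap_then_mp.item())  # 12.5
```

Here MP-then-AP keeps only each block's maximum before averaging, while AP-then-MP averages first and then selects the largest block mean, so the two orders generally select different input-output routes, consistent with the sensitivity the abstract reports.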