Improving Network Training on Resource-Constrained Devices via Habituation Normalization †
As a technique for accelerating and stabilizing training, batch normalization (BN) is widely used in deep learning. However, BN cannot effectively estimate the mean and variance of samples when training or fine-tuning with small batches of data on resource-constrained devices. This leads to a...
| Main Authors: | Lai, Huixia; Zhang, Lulu; Zhang, Shi |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2022 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9783687/ https://www.ncbi.nlm.nih.gov/pubmed/36560310 http://dx.doi.org/10.3390/s22249940 |
_version_ | 1784857637034131456 |
---|---|
author | Lai, Huixia Zhang, Lulu Zhang, Shi |
author_facet | Lai, Huixia Zhang, Lulu Zhang, Shi |
author_sort | Lai, Huixia |
collection | PubMed |
description | As a technique for accelerating and stabilizing training, batch normalization (BN) is widely used in deep learning. However, BN cannot effectively estimate the mean and variance of samples when training or fine-tuning with small batches of data on resource-constrained devices, which leads to a decrease in the accuracy of the deep learning model. In the fruit fly olfactory system, an algorithm based on the “negative image” habituation model can filter redundant information and improve numerical stability. Inspired by this circuit mechanism, we propose a novel normalization method, habituation normalization (HN). HN first eliminates the “negative image” obtained by habituation and then calculates the statistics for normalization, which solves the accuracy degradation of BN when the batch size is small. Experimental results show that HN speeds up neural network training and improves model accuracy for vanilla LeNet-5, VGG16, and ResNet-50 on the Fashion MNIST and CIFAR10 datasets. Compared with four standard normalization methods, HN maintains stable, high accuracy across different batch sizes, which shows that HN is strongly robust. Finally, applying HN to a deep learning-based EEG signal application system indicates that HN is suitable for network fine-tuning and for neural network applications under limited computing power and memory. |
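The abstract describes HN in two steps: subtract a habituated “negative image” of the input, then compute normalization statistics from the habituated signal. The paper's actual equations are not included in this record, so the following is only a minimal NumPy sketch of that description; the function name `habituation_normalize`, the running-average habituation update, and the rate `eta` are illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def habituation_normalize(x, neg_image, eta=0.1, eps=1e-5):
    """Hypothetical sketch of habituation normalization (HN).

    x         : activations, shape (batch, features)
    neg_image : running "negative image" of habituated (redundant) input,
                shape (features,)
    eta       : habituation rate (assumed form of the update)
    """
    # 1. Habituation: drift the negative image toward the current batch
    #    mean, modeling adaptation to repeated (redundant) stimuli.
    neg_image = (1.0 - eta) * neg_image + eta * x.mean(axis=0)
    # 2. Eliminate the negative image from the input.
    x_h = x - neg_image
    # 3. Normalize with statistics of the habituated signal.
    mean = x_h.mean(axis=0)
    var = x_h.var(axis=0)
    return (x_h - mean) / np.sqrt(var + eps), neg_image
```

Under this reading, the normalization statistics are taken over the residual after habituation rather than the raw batch, which is one plausible way the redundancy filtering could stabilize small-batch estimates; the paper should be consulted for the exact update rule.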
format | Online Article Text |
id | pubmed-9783687 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9783687 2022-12-24 Improving Network Training on Resource-Constrained Devices via Habituation Normalization † Lai, Huixia Zhang, Lulu Zhang, Shi Sensors (Basel) Article As a technique for accelerating and stabilizing training, batch normalization (BN) is widely used in deep learning. However, BN cannot effectively estimate the mean and variance of samples when training or fine-tuning with small batches of data on resource-constrained devices, which leads to a decrease in the accuracy of the deep learning model. In the fruit fly olfactory system, an algorithm based on the “negative image” habituation model can filter redundant information and improve numerical stability. Inspired by this circuit mechanism, we propose a novel normalization method, habituation normalization (HN). HN first eliminates the “negative image” obtained by habituation and then calculates the statistics for normalization, which solves the accuracy degradation of BN when the batch size is small. Experimental results show that HN speeds up neural network training and improves model accuracy for vanilla LeNet-5, VGG16, and ResNet-50 on the Fashion MNIST and CIFAR10 datasets. Compared with four standard normalization methods, HN maintains stable, high accuracy across different batch sizes, which shows that HN is strongly robust. Finally, applying HN to a deep learning-based EEG signal application system indicates that HN is suitable for network fine-tuning and for neural network applications under limited computing power and memory. MDPI 2022-12-16 /pmc/articles/PMC9783687/ /pubmed/36560310 http://dx.doi.org/10.3390/s22249940 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Lai, Huixia Zhang, Lulu Zhang, Shi Improving Network Training on Resource-Constrained Devices via Habituation Normalization † |
title | Improving Network Training on Resource-Constrained Devices via Habituation Normalization † |
title_full | Improving Network Training on Resource-Constrained Devices via Habituation Normalization † |
title_fullStr | Improving Network Training on Resource-Constrained Devices via Habituation Normalization † |
title_full_unstemmed | Improving Network Training on Resource-Constrained Devices via Habituation Normalization † |
title_short | Improving Network Training on Resource-Constrained Devices via Habituation Normalization † |
title_sort | improving network training on resource-constrained devices via habituation normalization † |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9783687/ https://www.ncbi.nlm.nih.gov/pubmed/36560310 http://dx.doi.org/10.3390/s22249940 |
work_keys_str_mv | AT laihuixia improvingnetworktrainingonresourceconstraineddevicesviahabituationnormalization AT zhanglulu improvingnetworktrainingonresourceconstraineddevicesviahabituationnormalization AT zhangshi improvingnetworktrainingonresourceconstraineddevicesviahabituationnormalization |