
IDSNN: Towards High-Performance and Low-Latency SNN Training via Initialization and Distillation


Bibliographic Details
Main Authors: Fan, Xiongfei; Zhang, Hong; Zhang, Yu
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10452895/
https://www.ncbi.nlm.nih.gov/pubmed/37622980
http://dx.doi.org/10.3390/biomimetics8040375
Description
Summary: Spiking neural networks (SNNs) are widely recognized for their biomimetic and efficient computing features; they use spikes to encode and transmit information. Despite their many advantages, SNNs suffer from low accuracy and high inference latency, caused, respectively, by direct training and by conversion from artificial neural network (ANN) training methods. To address these limitations, we propose a novel training pipeline (called IDSNN) based on parameter initialization and knowledge distillation, using an ANN as both parameter source and teacher. IDSNN maximizes the knowledge extracted from ANNs and achieves competitive top-1 accuracy on CIFAR10 ([Formula: see text]) and CIFAR100 ([Formula: see text]) with low latency. More importantly, it converges [Formula: see text] faster than directly trained SNNs under limited training resources, demonstrating its practical value in applications.
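The abstract states that an ANN teacher distills knowledge into the SNN student but does not give the exact objective. The sketch below shows the standard knowledge-distillation loss commonly used in such pipelines (temperature-scaled KL divergence between teacher and student outputs, blended with cross-entropy on the labels), written in plain NumPy; the function names, temperature, and weighting `alpha` are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend of hard-label cross-entropy and soft-label KL divergence:
    alpha * CE(student, labels) + (1 - alpha) * T^2 * KL(teacher || student).
    The T^2 factor keeps the soft-target gradients on the same scale as CE.
    """
    p_t = softmax(teacher_logits, temperature)   # softened teacher targets
    p_s = softmax(student_logits, temperature)   # softened student predictions
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return alpha * ce.mean() + (1 - alpha) * (temperature ** 2) * kl.mean()
```

In an SNN pipeline, `student_logits` would typically be the spike counts or membrane-potential readout averaged over timesteps; the distillation term pulls the student toward the ANN teacher's output distribution while the cross-entropy term anchors it to the ground-truth labels.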