Investigation of Deep Spiking Neural Networks Utilizing Gated Schottky Diode as Synaptic Devices
| Main Authors: | |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2022 |
| Subjects: | |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9696336/ https://www.ncbi.nlm.nih.gov/pubmed/36363821 http://dx.doi.org/10.3390/mi13111800 |
Summary: Deep learning achieves remarkable performance in various applications such as image classification and speech recognition. However, state-of-the-art deep neural networks require a large number of weights and enormous computation power, which creates an efficiency bottleneck for edge-device applications. To resolve these problems, deep spiking neural networks (DSNNs) have been proposed, together with specialized synapse and neuron hardware. In this work, the hardware neuromorphic system of DSNNs with gated Schottky diodes was investigated. Gated Schottky diodes have a near-linear conductance response, which makes it easy to implement quantized weights in synaptic devices. Based on modeling of the synaptic devices, two-layer fully connected neural networks are trained by off-chip learning. Adaptation of the neuron threshold is proposed to reduce the accuracy degradation caused by the conversion from analog neural networks (ANNs) to event-driven DSNNs. Using left-justified rate coding as the input encoding method enables low-latency classification. The effects of device variation and noisy images on classification accuracy are investigated. The time-to-first-spike (TTFS) scheme can significantly reduce power consumption by reducing the number of firing spikes compared to a max-firing scheme.
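
The summary references several mechanisms (quantized weights on near-linear conductance levels, ANN-to-DSNN conversion with threshold adaptation, left-justified rate coding, and TTFS versus max-firing readout). The sketch below is a minimal NumPy illustration of how these pieces could fit together; the 16-step encoding window, 32 conductance levels, soft-reset integrate-and-fire model, and max-activation threshold-scaling heuristic are assumptions for illustration, not the paper's reported configuration.

```python
# Minimal NumPy sketch of the pipeline outlined in the abstract.
# Window length, level count, and threshold rule are illustrative assumptions.
import numpy as np

T = 16            # encoding window in time steps (assumed)
N_LEVELS = 32     # conductance levels per synaptic device (assumed)

def quantize_weights(w, n_levels=N_LEVELS):
    """Map trained ANN weights onto discrete, near-linear conductance levels."""
    w_max = np.max(np.abs(w))
    step = 2 * w_max / (n_levels - 1)
    return np.round(w / step) * step

def left_justified_rate_code(x, T=T):
    """Encode intensities in [0, 1] as spike trains: a pixel firing k spikes
    places them in the first k time steps (left-justified), so downstream
    neurons can fire early and classification latency stays low."""
    n_spikes = np.round(x * T).astype(int)
    spikes = np.zeros((T, x.size), dtype=np.uint8)
    for i, k in enumerate(n_spikes):
        spikes[:k, i] = 1                  # front-load the spikes
    return spikes

def run_if_layer(spikes, w, theta):
    """Integrate-and-fire layer: accumulate weighted input spikes and emit an
    output spike whenever the membrane potential crosses threshold theta."""
    v = np.zeros(w.shape[1])
    out = np.zeros((spikes.shape[0], w.shape[1]), dtype=np.uint8)
    for t in range(spikes.shape[0]):
        v += spikes[t] @ w
        fired = v >= theta
        out[t] = fired
        v[fired] -= theta                  # soft reset
    return out

# Two-layer fully connected network on a dummy input (random weights here;
# the paper trains the weights off-chip before quantization).
rng = np.random.default_rng(0)
x = rng.random(784)
w1 = quantize_weights(rng.normal(0.0, 0.1, (784, 128)))
w2 = quantize_weights(rng.normal(0.0, 0.1, (128, 10)))

# Threshold adaptation: scale the threshold to the largest ANN pre-activation
# seen on data (a common conversion heuristic; the paper's exact rule may differ).
theta1 = float(np.max(x @ w1))
theta2 = 1.0                               # placeholder output threshold (assumed)

hidden_spikes = run_if_layer(left_justified_rate_code(x), w1, theta1)
out_spikes = run_if_layer(hidden_spikes, w2, theta2)

# Output decoding: TTFS picks the class that fires first; max-firing picks the
# class with the most spikes over the whole window.
first_spike_t = np.where(out_spikes.any(0), out_spikes.argmax(0), T)
ttfs_winner = int(np.argmin(first_spike_t))
max_firing_winner = int(np.argmax(out_spikes.sum(0)))
print(ttfs_winner, max_firing_winner)
```

Because TTFS decoding can stop integrating as soon as the first output spike appears, it needs far fewer spikes per inference than counting spikes over the full window, which is the power-saving effect the abstract attributes to the TTFS scheme.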