A pseudo-softmax function for hardware-based high speed image classification


Bibliographic Details
Main Authors: Cardarilli, Gian Carlo, Di Nunzio, Luca, Fazzolari, Rocco, Giardino, Daniele, Nannarelli, Alberto, Re, Marco, Spanò, Sergio
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8319144/
https://www.ncbi.nlm.nih.gov/pubmed/34321514
http://dx.doi.org/10.1038/s41598-021-94691-7
Description
Summary: In this work, a novel architecture named pseudo-softmax is presented to compute an approximate form of the softmax function. This architecture can be fruitfully used in the last layer of Neural Networks and Convolutional Neural Networks for classification tasks, and in Reinforcement Learning hardware accelerators to compute the Boltzmann action-selection policy. The proposed pseudo-softmax design, intended for efficient hardware implementation, exploits the typical integer quantization of hardware-based Neural Networks to obtain an accurate approximation of the result. The paper gives a detailed description of the architecture and an extensive analysis of the approximation error, using both custom stimuli and real-world Convolutional Neural Network inputs. The implementation results, based on CMOS standard-cell technology, show reduced approximation errors compared to state-of-the-art architectures.
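To illustrate the kind of approximation the summary describes, the sketch below contrasts the standard softmax with a hardware-friendly variant that replaces the base-e exponential with base 2, so that on integer-quantized inputs the exponentiation reduces to bit shifts. This is a common hardware approximation strategy and is offered here only as an assumed, simplified model; the actual pseudo-softmax architecture in the paper may differ in its details.

```python
import numpy as np

def softmax(x):
    # Standard softmax, with max-subtraction for numerical stability.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def base2_pseudo_softmax(x):
    # Hardware-friendly variant (assumed model): replace e^x with 2^x.
    # On integer inputs, 2^(x - max) is just a right shift by (max - x),
    # which avoids costly exponential units in hardware.
    p = np.exp2(x - np.max(x))
    return p / p.sum()

# Example with integer logits, as produced by a quantized network.
logits = np.array([3, 1, 0, -2], dtype=np.float64)
exact = softmax(logits)
approx = base2_pseudo_softmax(logits)
print("softmax:       ", exact)
print("pseudo-softmax:", approx)
```

Both variants produce a valid probability distribution and preserve the argmax, which is what matters for classification; the base-2 version merely redistributes probability mass slightly relative to the exact softmax.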