A Post-training Quantization Method for the Design of Fixed-Point-Based FPGA/ASIC Hardware Accelerators for LSTM/GRU Algorithms
Recurrent Neural Networks (RNNs) have become important tools for tasks such as speech recognition, text generation, or natural language processing. However, their inference may involve up to billions of operations, and their large number of parameters leads to large storage size and runtime memory usage…
Main Authors: Rapuano, Emilio; Pacini, Tommaso; Fanucci, Luca
Format: Online Article Text
Language: English
Published: Hindawi, 2022
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9117057/
https://www.ncbi.nlm.nih.gov/pubmed/35602644
http://dx.doi.org/10.1155/2022/9485933
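The record does not reproduce the paper's method, but as a rough sketch of the general technique named in the title, post-training quantization to fixed point can be pictured as rounding trained floating-point weights onto a Qm.n grid and clipping them to the hardware word length. The snippet below is illustrative only: the Q4.12 format, the function names, and the toy weight matrix are assumptions, not the authors' algorithm.

```python
import numpy as np

def to_fixed_point(x, total_bits=16, frac_bits=12):
    """Quantize a float array to signed fixed-point codes (Q4.12 by default).

    Illustrative assumption: 16-bit datapath word, 12 fractional bits.
    """
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))      # most negative representable code
    hi = (1 << (total_bits - 1)) - 1   # most positive representable code
    return np.clip(np.round(x * scale), lo, hi).astype(np.int32)

def from_fixed_point(q, frac_bits=12):
    """Map fixed-point codes back to floats to measure quantization error."""
    return q.astype(np.float64) / (1 << frac_bits)

# Toy example: a random matrix standing in for a trained LSTM gate weight.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=(128, 128))
q = to_fixed_point(w)
err = np.max(np.abs(w - from_fixed_point(q)))
print(f"max abs quantization error: {err:.6f}")  # ~2**-13 for in-range values
```

For in-range values, the rounding error of such a scheme is bounded by half a least-significant bit, i.e. 2^-(frac_bits+1); choosing the split between integer and fractional bits per tensor is where a real post-training method, like the one this paper proposes, does its work.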
Similar Items
- FPGA-based hardware accelerators
  by: Skliarova, Iouliia, et al.
  Published: (2019)
- Customizable FPGA-Based Hardware Accelerator for Standard Convolution Processes Empowered with Quantization Applied to LiDAR Data
  by: Silva, João, et al.
  Published: (2022)
- Predicting Energy Consumption Using LSTM, Multi-Layer GRU and Drop-GRU Neural Networks
  by: Mahjoub, Sameh, et al.
  Published: (2022)
- Predictions for COVID-19 with deep learning models of LSTM, GRU and Bi-LSTM
  by: Shahid, Farah, et al.
  Published: (2020)
- Attention based GRU-LSTM for software defect prediction
  by: Munir, Hafiz Shahbaz, et al.
  Published: (2021)