
An ASIP for Neural Network Inference on Embedded Devices with 99% PE Utilization and 100% Memory Hidden under Low Silicon Cost

The computation efficiency and flexibility of accelerators hinder deep neural network (DNN) implementation in embedded applications. Although there are many publications on DNN processors, there is still much room for deeper optimization to further improve results. Multiple di...


Bibliographic Details

Main Authors: Gao, Muxuan; Chen, He; Liu, Dake
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9146143/
https://www.ncbi.nlm.nih.gov/pubmed/35632250
http://dx.doi.org/10.3390/s22103841
