Ps and Qs: Quantization-Aware Pruning for Efficient Low Latency Neural Network Inference

Efficient machine learning implementations optimized for inference in hardware have wide-ranging benefits, depending on the application, from lower inference latency to higher data throughput and reduced energy consumption. Two popular techniques for reducing computation in neural networks are pruning and quantization.

Bibliographic Details
Main Authors: Hawks, Benjamin; Duarte, Javier; Fraser, Nicholas J.; Pappalardo, Alessandro; Tran, Nhan; Umuroglu, Yaman
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8299073/
https://www.ncbi.nlm.nih.gov/pubmed/34308339
http://dx.doi.org/10.3389/frai.2021.676564