A Novel Low-Bit Quantization Strategy for Compressing Deep Neural Networks

The increasing sophistication of neural network models in recent years has sharply expanded their memory consumption and computational cost, hindering deployment on ASICs, FPGAs, and other mobile or embedded devices. Therefore, compressing and accelerating neural networks is necessary. In th...
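The record does not reproduce the paper's actual quantization scheme, but the kind of low-bit weight quantization the title refers to can be illustrated with a generic symmetric uniform quantizer. The function below is a hedged sketch, not the authors' method: `quantize_uniform`, its signed-integer range, and the example weights are all illustrative assumptions.

```python
import numpy as np

def quantize_uniform(w, num_bits=2):
    # Generic symmetric per-tensor quantizer (illustrative, not the paper's method).
    # Maps float weights onto signed integers in [-2^(b-1), 2^(b-1)-1],
    # then dequantizes back with a single scale factor.
    qmax = 2 ** (num_bits - 1) - 1          # largest positive level, e.g. 1 for 2 bits
    scale = np.max(np.abs(w)) / qmax        # per-tensor scale from the weight range
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)  # integer codes
    w_hat = q * scale                       # dequantized (reconstructed) weights
    return w_hat, q.astype(np.int8), scale

# Example: 2-bit quantization of a small weight vector
w = np.array([0.5, -0.25, 0.1, -0.9])
w_hat, q, scale = quantize_uniform(w, num_bits=2)
```

Storing only the `int8` codes plus one scale per tensor is what yields the memory savings the abstract alludes to; real low-bit schemes differ mainly in how the scale (and possibly non-uniform levels) are chosen and trained.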


Bibliographic Details

Main Authors: Long, Xin, Zeng, XiangRong, Ben, Zongcheng, Zhou, Dianle, Zhang, Maojun
Format: Online Article Text
Language: English
Published: Hindawi 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7049432/
https://www.ncbi.nlm.nih.gov/pubmed/32148472
http://dx.doi.org/10.1155/2020/7839064
