A Novel Low-Bit Quantization Strategy for Compressing Deep Neural Networks
The growing sophistication of neural network models in recent years has sharply increased their memory consumption and computational cost, hindering their deployment on ASICs, FPGAs, and other mobile devices. Compressing and accelerating neural networks is therefore necessary. In th...
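The abstract describes compressing networks through low-bit quantization. As a rough illustration of the general idea (not the paper's specific strategy, which is not detailed in this record), a minimal sketch of symmetric uniform quantization to a given bit width might look like this; the function names and the 4-bit setting are illustrative assumptions:

```python
import numpy as np

def quantize_uniform(weights, bits=4):
    """Map float weights to signed integer codes of the given bit width.

    Illustrative symmetric uniform quantizer, not the paper's method.
    """
    qmax = 2 ** (bits - 1) - 1                    # e.g. 7 for 4-bit signed codes
    scale = np.max(np.abs(weights)) / qmax        # one scale per tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q.astype(np.int8), scale

def dequantize(q, scale):
    """Recover an approximate float tensor from integer codes."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s = quantize_uniform(w, bits=4)
w_hat = dequantize(q, s)
# each weight is off by at most half a quantization step (scale / 2)
```

Storing 4-bit codes plus one float scale per tensor is what yields the memory savings the abstract alludes to: roughly an 8x reduction relative to 32-bit floats, at the cost of the rounding error above.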
Main authors: (not captured in this record)
Format: Online Article Text
Language: English
Published: Hindawi, 2020
Subjects: (not captured in this record)
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7049432/
https://www.ncbi.nlm.nih.gov/pubmed/32148472
http://dx.doi.org/10.1155/2020/7839064