AoCStream: All-on-Chip CNN Accelerator with Stream-Based Line-Buffer Architecture and Accelerator-Aware Pruning
Convolutional neural networks (CNNs) play a crucial role in many EdgeAI and TinyML applications, but their implementation usually requires external memory, which degrades the feasibility of such resource-hungry environments. To solve this problem, this paper proposes memory-reduction methods at the...
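The abstract refers to a stream-based line-buffer architecture. As a rough, hypothetical illustration of the general line-buffer idea only (not the paper's actual accelerator design), the Python sketch below buffers just the last few image rows so a 3x3 convolution can run over a row-by-row pixel stream without holding the whole frame in memory; the function name, shapes, and 'valid' padding are assumptions made for this example.

```python
# Illustrative sketch (assumed, not from the paper): a line buffer that lets a
# 3x3 convolution (cross-correlation, as in CNNs) process a streamed image one
# row at a time, keeping only 3 rows on chip instead of the full frame.
from collections import deque

def stream_conv3x3(rows, kernel):
    """rows: iterable of equal-length lists (one image row per step).
    kernel: 3x3 list of weights.
    Yields one output row per fully buffered 3-row window ('valid' padding)."""
    line_buffer = deque(maxlen=3)           # holds only the last 3 input rows
    for row in rows:
        line_buffer.append(row)
        if len(line_buffer) < 3:
            continue                        # window not yet full, keep streaming
        width = len(row)
        out_row = []
        for x in range(width - 2):          # slide the 3x3 window across the row
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * line_buffer[ky][x + kx]
            out_row.append(acc)
        yield out_row

# Usage example: 5x5 ramp image with a 3x3 averaging kernel.
image = [[x + 5 * y for x in range(5)] for y in range(5)]
avg_kernel = [[1 / 9] * 3 for _ in range(3)]
for out in stream_conv3x3(image, avg_kernel):
    print(out)
```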
| Main Authors: | Kang, Hyeong-Ju; Yang, Byung-Do |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2023 |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10575357/ ; https://www.ncbi.nlm.nih.gov/pubmed/37836934 ; http://dx.doi.org/10.3390/s23198104 |
Similar Items

- Embedding channel pruning within the CNN architecture design using a bi-level evolutionary approach
  by: Louati, Hassen, et al.
  Published: (2023)
- Solar Power Prediction Using Dual Stream CNN-LSTM Architecture
  by: Alharkan, Hamad, et al.
  Published: (2023)
- Retrain or Not Retrain? - Efficient Pruning Methods of Deep CNN Networks
  by: Pietron, Marcin, et al.
  Published: (2020)
- Diagnosis of Lumbar Spondylolisthesis Using a Pruned CNN Model
  by: Saravagi, Deepika, et al.
  Published: (2022)
- Two-Stream Problems in Accelerators
  by: Zimmermann, Frank, et al.
  Published: (2001)