Memory-Replay Knowledge Distillation
Knowledge Distillation (KD), which transfers the knowledge from a teacher to a student network by penalizing their Kullback–Leibler (KL) divergence, is a widely used tool for Deep Neural Network (DNN) compression in intelligent sensor systems. Traditional KD uses a pre-trained teacher, while self-KD d...
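As a minimal sketch of the traditional KD loss referenced in the abstract (not the memory-replay method the paper itself proposes), the student can be trained against the KL divergence between its temperature-softened logits and those of a pre-trained teacher, combined with an ordinary cross-entropy term. The PyTorch code below is illustrative; the temperature `T` and weight `alpha` are assumed hyperparameters, not values from the paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-scaled
    # teacher and student distributions, scaled by T^2 as is standard.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example usage with random tensors standing in for real model outputs.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = kd_loss(student_logits, teacher_logits, labels)
```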
Main Authors: Wang, Jiyue; Zhang, Pei; Li, Yanxiong
Format: Online Article Text
Language: English
Published: MDPI, 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8071405/ https://www.ncbi.nlm.nih.gov/pubmed/33921068 http://dx.doi.org/10.3390/s21082792
Similar Items

- A neural network account of memory replay and knowledge consolidation
  by: Barry, Daniel N, et al.
  Published: (2022)
- RS-SSKD: Self-Supervision Equipped with Knowledge Distillation for Few-Shot Remote Sensing Scene Classification
  by: Zhang, Pei, et al.
  Published: (2021)
- The Role of Hippocampal Replay in Memory and Planning
  by: Ólafsdóttir, H. Freyja, et al.
  Published: (2018)
- Memory replay in balanced recurrent networks
  by: Chenkov, Nikolay, et al.
  Published: (2017)
- Memory trace replay: the shaping of memory consolidation by neuromodulation
  by: Atherton, Laura A., et al.
  Published: (2015)