
Memory-Replay Knowledge Distillation

Knowledge Distillation (KD), which transfers the knowledge from a teacher to a student network by penalizing their Kullback–Leibler (KL) divergence, is a widely used tool for Deep Neural Network (DNN) compression in intelligent sensor systems. Traditional KD uses a pre-trained teacher, while self-KD d...
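
The abstract describes the standard KD objective: penalizing the KL divergence between teacher and student outputs. The following is a minimal sketch of that loss, assuming PyTorch and a softmax temperature (both illustrative choices, not details taken from the article itself).

```python
# Sketch of a standard knowledge-distillation loss (Hinton-style), assuming PyTorch.
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            temperature: float = 4.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student distributions."""
    T = temperature  # illustrative value; the article may use a different setting
    student_log_probs = F.log_softmax(student_logits / T, dim=-1)
    teacher_probs = F.softmax(teacher_logits / T, dim=-1)
    # 'batchmean' matches the mathematical definition of KL divergence;
    # the T**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (T ** 2)

if __name__ == "__main__":
    # Example usage with random logits: batch of 8 samples, 10 classes.
    s = torch.randn(8, 10)
    t = torch.randn(8, 10)
    print(kd_loss(s, t).item())
```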


Bibliographic Details
Main Authors: Wang, Jiyue, Zhang, Pei, Li, Yanxiong
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8071405/
https://www.ncbi.nlm.nih.gov/pubmed/33921068
http://dx.doi.org/10.3390/s21082792
