Prevalence of neural collapse during the terminal phase of deep learning training
Modern practice for training classification deepnets involves a terminal phase of training (TPT), which begins at the epoch where training error first vanishes. During TPT, the training error stays effectively zero, while training loss is pushed toward zero. Direct measurements of TPT, for three pro...
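The abstract defines the terminal phase of training (TPT) as beginning at the epoch where training error first vanishes, after which error stays effectively zero while the loss keeps decreasing. A minimal sketch of detecting that onset from a per-epoch error curve (the function name and the example error values are hypothetical, not from the paper):

```python
def tpt_onset(train_errors, tol=0.0):
    """Return the first epoch index at which training error reaches `tol`,
    i.e. the start of the terminal phase of training (TPT).
    Returns None if the error never vanishes."""
    for epoch, err in enumerate(train_errors):
        if err <= tol:
            return epoch
    return None

# Hypothetical per-epoch training error curve: error hits zero at epoch 4,
# after which it stays at zero while the loss (not shown) keeps shrinking.
errors = [0.31, 0.12, 0.04, 0.01, 0.0, 0.0, 0.0]
print(tpt_onset(errors))  # → 4
```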
Main Authors: Papyan, Vardan; Han, X. Y.; Donoho, David L.
Format: Online Article Text
Language: English
Published: National Academy of Sciences, 2020
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7547234/
https://www.ncbi.nlm.nih.gov/pubmed/32958680
http://dx.doi.org/10.1073/pnas.2015509117
Similar Items
- Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training
  by: Fang, Cong, et al.
  Published: (2021)
- Expert surgeons and deep learning models can predict the outcome of surgical hemorrhage from 1 min of video
  by: Pangal, Dhiraj J., et al.
  Published: (2022)
- Dynamics in Deep Classifiers Trained with the Square Loss: Normalization, Low Rank, Neural Collapse, and Generalization Bounds
  by: Xu, Mengjia, et al.
  Published: (2023)
- Utility of the Simulated Outcomes Following Carotid Artery Laceration Video Data Set for Machine Learning Applications
  by: Kugener, Guillaume, et al.
  Published: (2022)
- Learning in deep neural networks and brains with similarity-weighted interleaved learning
  by: Saxena, Rajat, et al.
  Published: (2022)