On Scalable Deep Learning and Parallelizing Gradient Descent

Speeding up gradient-based methods has been a subject of interest in recent years, with many practical applications, especially with respect to Deep Learning. Although many optimizations have been made at the hardware level, the convergence rate of very large models remains problematic...
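As a rough illustration of what parallelizing gradient descent can mean in the simplest synchronous case (this is a hedged sketch, not the method developed in the thesis), the code below averages per-worker gradients computed on disjoint data shards before each shared parameter update. All names, the toy problem, and the hyperparameters are illustrative assumptions.

# A minimal sketch (not the thesis's actual method) of synchronous
# data-parallel gradient descent: each simulated worker computes a
# gradient on its own data shard, and the gradients are averaged
# before one shared parameter update. All names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression problem: minimize mean((X @ w - y)**2).
n, d, n_workers = 1024, 8, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.01 * rng.normal(size=n)

# Split the data into one shard per simulated worker.
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

def shard_gradient(w, X_s, y_s):
    # Gradient of the mean squared error on one shard.
    return 2.0 * X_s.T @ (X_s @ w - y_s) / len(y_s)

w = np.zeros(d)
lr = 0.05
for step in range(200):
    # Each "worker" computes a local gradient; with equal shard sizes,
    # the average equals one full-batch gradient step.
    grads = [shard_gradient(w, X_s, y_s) for X_s, y_s in shards]
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))

With equal shard sizes this synchronous averaging reproduces full-batch gradient descent exactly; asynchronous variants trade that equivalence for less idle time, which is one of the tensions the abstract alludes to.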

Bibliographic Details
Main Author: Hermans, Joeri
Language: eng
Published: 2017
Subjects:
Online Access: http://cds.cern.ch/record/2276711