On Scalable Deep Learning and Parallelizing Gradient Descent
Speeding up gradient-based methods has been a subject of interest in recent years, with many practical applications, especially in Deep Learning. Although many optimizations have been made at the hardware level, the convergence rate of very large models remains problematic...
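The thesis's subject, parallelizing gradient descent, is commonly realized as data-parallel training: every worker computes a gradient on its own shard of the data, and the per-worker gradients are combined into a single parameter update. The sketch below illustrates one such scheme (synchronous gradient averaging) on a toy problem; it is a minimal illustration, not code from the thesis, and the toy regression task, `local_gradient`, and all hyperparameters are illustrative assumptions.

```python
# Illustrative sketch of synchronous data-parallel gradient descent:
# each worker computes a gradient on its own data shard, and the
# averaged gradient updates a shared parameter vector. Pure NumPy;
# all names and settings are hypothetical, not from the thesis.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression problem: find w such that X @ w ≈ y.
X = rng.normal(size=(1024, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.01 * rng.normal(size=1024)

n_workers = 4
shards = np.array_split(np.arange(len(X)), n_workers)  # one shard per worker

def local_gradient(w, idx):
    """Mean-squared-error gradient on one worker's shard."""
    Xi, yi = X[idx], y[idx]
    return 2.0 * Xi.T @ (Xi @ w - yi) / len(idx)

w = np.zeros(8)
lr = 0.1
for step in range(200):
    # Synchronous step: all workers finish, gradients are averaged,
    # then every worker receives the same updated parameters.
    grads = [local_gradient(w, idx) for idx in shards]
    w -= lr * np.mean(grads, axis=0)

print("parameter error:", np.linalg.norm(w - true_w))
```

Because the update waits for every worker, this synchronous scheme matches serial gradient descent on the full batch; asynchronous variants trade that exactness for throughput, which is the tension the abstract alludes to.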
| Main Author: | Hermans, Joeri |
|---|---|
| Language: | eng |
| Published: | 2017 |
| Subjects: | |
| Online Access: | http://cds.cern.ch/record/2276711 |
Similar Items
- Advanced computer architecture: parallelism, scalability, programmability
  by: Hwang, Kai
  Published: (1993)
- Scalable Unsupervised Learning for Deep Discrete Generative Models
  by: Guiraud, Enrico
  Published: (2021)
- Practical machine learning with H2O: powerful, scalable techniques for deep learning and AI
  by: Cook, Darren
  Published: (2017)
- Advanced deep learning with Keras: apply deep learning techniques, autoencoders, GANs, variational autoencoders, deep reinforcement learning, policy gradients, and more
  by: Atienza, Rowel
  Published: (2018)
- Learning Apache Cassandra: managing fault-tolerant and scalable data
  by: Yarabarla, Sandeep
  Published: (2017)