
How Amdahl’s Law limits the performance of large artificial neural networks: why the functionality of full-scale brain simulation on processor-based simulators is limited

Bibliographic Details

Main Author: Végh, János
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6458202/
https://www.ncbi.nlm.nih.gov/pubmed/30972504
http://dx.doi.org/10.1186/s40708-019-0097-2

Description
Summary: As more and more details become known about how neurons and complex neural networks work, and as demand grows for huge artificial networks that perform well, increasing effort is being devoted to building hardware and software simulators and supercomputers targeting artificial intelligence applications, which demand an exponentially increasing amount of computing capacity. However, the inherently parallel operation of neural networks is mostly simulated by deploying inherently sequential (or, at best, sequential–parallel) computing elements. The paper shows that neural network simulators, both software and hardware ones, akin to all other sequential–parallel computing systems, have a computing performance limitation due to deploying clock-driven electronic circuits, the 70-year-old computing paradigm, and Amdahl's Law for parallelized computing systems. The findings explain the limitations/saturation experienced in former studies.
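For illustration only (this sketch is not from the article): Amdahl's Law bounds the speedup of a workload with parallelizable fraction p run on N processors to S(N) = 1 / ((1 - p) + p / N), which saturates at 1 / (1 - p) no matter how large N grows. The values of p and N below are hypothetical, chosen only to show the kind of saturation the paper attributes to sequential–parallel simulators.

    # Minimal Amdahl's Law sketch (illustrative, not the paper's model).
    def amdahl_speedup(p: float, n: int) -> float:
        """Speedup of a workload with parallel fraction p on n processors."""
        return 1.0 / ((1.0 - p) + p / n)

    # Assumed parallel fraction p = 0.99; processor counts are arbitrary examples.
    for n in (1, 10, 100, 1_000, 1_000_000):
        print(n, round(amdahl_speedup(0.99, n), 2))

    # Output approaches but never exceeds 1 / (1 - 0.99) = 100:
    # adding processors beyond a point yields essentially no further speedup.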