
Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and Its Application to Sparse Coding

The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
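
The abstract describes two crossbar kernels: a parallel read, which is electrically a vector-matrix multiplication, and a parallel write, which is a rank-1 (outer-product) update. The following is a minimal, idealized Python/NumPy sketch of those two operations, not code from the article; the function names, the learning rate lr, and the Hebbian-style usage at the end are illustrative assumptions, and device non-idealities, ADC/DAC overhead, and read noise are ignored.

import numpy as np

# Idealized N x N crossbar: the array stores a conductance matrix G.
N = 4
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(N, N))  # crossbar conductances (arbitrary units)

def parallel_read(G, v):
    # Parallel read: drive the N rows with voltages v and sense the N
    # column currents simultaneously, i.e. one vector-matrix multiply.
    # A digital memory-based architecture would fetch all N^2 weights to
    # do the same work, which is the source of the O(N) energy advantage
    # claimed in the abstract.
    return v @ G

def parallel_write(G, x, y, lr=0.01):
    # Parallel write: apply a rank-1 outer-product update lr * x y^T to
    # all N^2 cells at once (lr is an illustrative learning rate).
    return G + lr * np.outer(x, y)

v = rng.uniform(0.0, 1.0, size=N)   # input voltages on the rows
currents = parallel_read(G, v)      # column currents = dot products
G = parallel_write(G, v, currents)  # e.g. a Hebbian-style weight update

In a sparse coding setting, for example, the parallel read could compute the dictionary projections of an input while the parallel write could update the dictionary, which is how the kernel-level savings would carry over to the full algorithm.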


Bibliographic Details
Main Authors: Agarwal, Sapan, Quach, Tu-Thach, Parekh, Ojas, Hsia, Alexander H., DeBenedictis, Erik P., James, Conrad D., Marinella, Matthew J., Aimone, James B.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2016
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4701906/
https://www.ncbi.nlm.nih.gov/pubmed/26778946
http://dx.doi.org/10.3389/fnins.2015.00484
collection PubMed
id pubmed-4701906
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal Front Neurosci (Neuroscience)
publication date 2016-01-06
license Copyright © 2016 Agarwal, Quach, Parekh, Hsia, DeBenedictis, James, Marinella and Aimone. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
topic Neuroscience