A learnable parallel processing architecture towards unity of memory and computing
Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve speed by 76.8% and reduce power dissipation by 60.3%, together with a 700-fold reduction in circuit area.
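The abstract describes logic-in-memory: nonvolatile crossbar cells both store and evaluate user-defined logic functions, so operands never shuttle between a separate memory and processor. The record does not give the paper's actual circuit scheme, so the following is only a loose, hypothetical sketch of that idea in software: an array stands in for a "programmed" crossbar, a learned truth table is the stored state, and evaluation is a direct memory read.

```python
# Illustrative sketch only: the actual "iMemComp" circuit details are not in
# this record. A plain list stands in for a nonvolatile crossbar that both
# stores ("learns") a user-defined logic function and evaluates it by lookup,
# so no operands travel between a separate memory and ALU.

def learn_function(fn, n_inputs):
    """'Program' the crossbar: store fn's output for every input pattern."""
    crossbar = []
    for pattern in range(1 << n_inputs):
        bits = [(pattern >> i) & 1 for i in range(n_inputs)]
        crossbar.append(fn(*bits))
    return crossbar

def evaluate(crossbar, *bits):
    """The memory read IS the computation: address the stored result."""
    addr = sum(b << i for i, b in enumerate(bits))
    return crossbar[addr]

# Learn the carry-out of a 1-bit full adder, then evaluate by lookup.
carry = learn_function(lambda a, b, cin: (a & b) | (cin & (a ^ b)), 3)
print(evaluate(carry, 1, 1, 0))  # 1
print(evaluate(carry, 1, 0, 1))  # 1
print(evaluate(carry, 1, 0, 0))  # 0
```

All rows of the stored table can in principle be interrogated simultaneously in a physical crossbar, which is where the claimed parallelism comes from; the software lookup above is only a functional analogy.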
| Main Authors: | Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J. |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Nature Publishing Group, 2015 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4536493/ · https://www.ncbi.nlm.nih.gov/pubmed/26271243 · http://dx.doi.org/10.1038/srep13330 |
| _version_ | 1782385749767749632 |
|---|---|
| author | Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J. |
| author_facet | Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J. |
| author_sort | Li, H. |
| collection | PubMed |
| description | Developing energy-efficient parallel information processing systems beyond von Neumann architecture is a long-standing goal of modern information technologies. The widely used von Neumann computer architecture separates memory and computing units, which leads to energy-hungry data movement when computers work. In order to meet the need of efficient information processing for data-driven applications such as big data and the Internet of Things, an energy-efficient processing architecture beyond von Neumann is critical for the information society. Here we show a non-von Neumann architecture built of resistive switching (RS) devices named “iMemComp”, where memory and logic are unified with single-type devices. Leveraging the nonvolatile nature and structural parallelism of crossbar RS arrays, we have equipped “iMemComp” with capabilities of computing in parallel and learning user-defined logic functions for large-scale information processing tasks. Such an architecture eliminates the energy-hungry data movement in von Neumann computers. Compared with contemporary silicon technology, adder circuits based on “iMemComp” can improve speed by 76.8% and reduce power dissipation by 60.3%, together with a 700-fold reduction in circuit area. |
| format | Online Article Text |
| id | pubmed-4536493 |
| institution | National Center for Biotechnology Information |
| language | English |
| publishDate | 2015 |
| publisher | Nature Publishing Group |
| record_format | MEDLINE/PubMed |
| spelling | pubmed-4536493 2015-09-04. A learnable parallel processing architecture towards unity of memory and computing. Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J. Sci Rep, Article. Nature Publishing Group, 2015-08-14. /pmc/articles/PMC4536493/ · /pubmed/26271243 · http://dx.doi.org/10.1038/srep13330. Text, en. Copyright © 2015, Macmillan Publishers Limited. This work is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. |
| spellingShingle | Article; Li, H.; Gao, B.; Chen, Z.; Zhao, Y.; Huang, P.; Ye, H.; Liu, L.; Liu, X.; Kang, J.; A learnable parallel processing architecture towards unity of memory and computing |
| title | A learnable parallel processing architecture towards unity of memory and computing |
| title_full | A learnable parallel processing architecture towards unity of memory and computing |
| title_fullStr | A learnable parallel processing architecture towards unity of memory and computing |
| title_full_unstemmed | A learnable parallel processing architecture towards unity of memory and computing |
| title_short | A learnable parallel processing architecture towards unity of memory and computing |
| title_sort | learnable parallel processing architecture towards unity of memory and computing |
| topic | Article |
| url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4536493/ · https://www.ncbi.nlm.nih.gov/pubmed/26271243 · http://dx.doi.org/10.1038/srep13330 |