A Novel Memory-Scheduling Strategy for Large Convolutional Neural Network on Memory-Limited Devices

Bibliographic Details
Main Authors: Li, Shijie, Shen, Xiaolong, Dou, Yong, Ni, Shice, Xu, Jinwei, Yang, Ke, Wang, Qiang, Niu, Xin
Format: Online Article Text
Language: English
Published: Hindawi 2019
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6512078/
https://www.ncbi.nlm.nih.gov/pubmed/31182958
http://dx.doi.org/10.1155/2019/4328653
_version_ 1783417644739723264
author Li, Shijie
Shen, Xiaolong
Dou, Yong
Ni, Shice
Xu, Jinwei
Yang, Ke
Wang, Qiang
Niu, Xin
author_sort Li, Shijie
collection PubMed
description Recently, machine learning, and deep learning in particular, has become a core class of algorithms widely used in fields such as natural language processing, speech recognition, and object recognition. At the same time, more and more applications are moving to wearable and mobile devices. However, traditional deep learning models such as the convolutional neural network (CNN) and its variants consume large amounts of memory, which makes these powerful methods difficult to apply on memory-limited mobile platforms. To solve this problem, we present a novel memory-management strategy called mmCNN. With this method, a trained large CNN can be deployed on a platform with any amount of memory, such as a GPU, an FPGA, or a memory-limited mobile device. In our experiments, we run CNN feed-forward inference under extremely small memory budgets (as low as 5 MB) on a GPU platform. The results show that our method saves more than 98% of memory compared to a traditional CNN implementation, and more than 90% compared to the state-of-the-art related work vDNN (virtualized deep neural networks). This work improves the computing scalability of lightweight applications and breaks the memory bottleneck of using deep learning methods on memory-limited devices.
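The abstract states the goal of mmCNN but not its mechanics. As a minimal sketch of the general technique such schedulers rely on (executing a feed-forward pass under a fixed memory budget by keeping only a small working set resident at a time), the following Python fragment tiles a convolution over output rows; the function name conv2d_tiled, the budget_bytes parameter, and the single-channel "valid" convolution are all illustrative assumptions, not the paper's actual algorithm.

import numpy as np

def conv2d_tiled(x, w, budget_bytes):
    # "Valid" 2D convolution computed one output-row tile at a time, so the
    # resident working set (an input slice plus an output tile) stays under
    # budget_bytes; everything else could live in host memory or on disk.
    kh, kw = w.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    # Bytes needed per output row: one output row plus the kh input rows it reads.
    per_row = (ow + kh * x.shape[1]) * x.itemsize
    rows_per_tile = max(1, budget_bytes // per_row)
    out = np.empty((oh, ow), dtype=x.dtype)   # stands in for host-side storage
    for r0 in range(0, oh, rows_per_tile):
        r1 = min(r0 + rows_per_tile, oh)
        x_slice = x[r0:r1 + kh - 1, :]        # only this slice is "resident"
        for i in range(r1 - r0):
            for j in range(ow):
                out[r0 + i, j] = np.sum(x_slice[i:i + kh, j:j + kw] * w)
    return out

Setting budget_bytes very low degenerates to one output row per tile: the pass still completes, only with more data movement, which is consistent with the paper's claim that feed-forward can run within a few megabytes at the cost of extra transfers.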
format Online
Article
Text
id pubmed-6512078
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Hindawi
record_format MEDLINE/PubMed
spelling pubmed-6512078 2019-06-10 A Novel Memory-Scheduling Strategy for Large Convolutional Neural Network on Memory-Limited Devices. Li, Shijie; Shen, Xiaolong; Dou, Yong; Ni, Shice; Xu, Jinwei; Yang, Ke; Wang, Qiang; Niu, Xin. Comput Intell Neurosci, Research Article. Hindawi, 2019-04-28. /pmc/articles/PMC6512078/ /pubmed/31182958 http://dx.doi.org/10.1155/2019/4328653 Text en Copyright © 2019 Shijie Li et al. http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
title A Novel Memory-Scheduling Strategy for Large Convolutional Neural Network on Memory-Limited Devices
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6512078/
https://www.ncbi.nlm.nih.gov/pubmed/31182958
http://dx.doi.org/10.1155/2019/4328653