
Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

Bibliographic Details
Main Authors: McDonnell, Mark D., Tissera, Migel D., Vladusich, Tony, van Schaik, André, Tapson, Jonathan
Format: Online Article Text
Language: English
Published: Public Library of Science 2015
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4532447/
https://www.ncbi.nlm.nih.gov/pubmed/26262687
http://dx.doi.org/10.1371/journal.pone.0134254
author McDonnell, Mark D.
Tissera, Migel D.
Vladusich, Tony
van Schaik, André
Tapson, Jonathan
author_sort McDonnell, Mark D.
collection PubMed
description Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
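The description above outlines the two key ingredients of the approach: a fixed random hidden layer whose input weights are sparse because each hidden unit sees only a random image patch, and output weights solved in closed form rather than by iterative training. The sketch below is a minimal illustration of that scheme under stated assumptions, not the authors' implementation: the function names, the tanh activation, the patch-size range, and the ridge regularization parameter are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_receptive_field_weights(n_hidden, img_side=28, min_patch=3, max_patch=20):
    """Sparse input weights: each hidden unit connects only to a randomly
    sized and positioned square patch of the image (random 'receptive field')."""
    W = np.zeros((n_hidden, img_side * img_side))
    for h in range(n_hidden):
        size = rng.integers(min_patch, max_patch + 1)          # random patch size
        r0 = rng.integers(0, img_side - size + 1)              # random position
        c0 = rng.integers(0, img_side - size + 1)
        mask = np.zeros((img_side, img_side), dtype=bool)
        mask[r0:r0 + size, c0:c0 + size] = True
        W[h, mask.ravel()] = rng.standard_normal(size * size)  # weights only in patch
    return W

def train_elm(X, Y, W, ridge=1e-2):
    """ELM training: the hidden layer stays fixed and random; only the
    output weights are learned, in closed form via ridge regression."""
    H = np.tanh(X @ W.T)                       # hidden-layer activations
    A = H.T @ H + ridge * np.eye(H.shape[1])   # regularized normal equations
    return np.linalg.solve(A, H.T @ Y)         # output weight matrix

def predict(X, W, B):
    return np.argmax(np.tanh(X @ W.T) @ B, axis=1)
```

The closed-form solve is what makes training fast: there is no iterative optimization over the hidden layer at all, only one linear system in the number of hidden units.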
format Online
Article
Text
id pubmed-4532447
institution National Center for Biotechnology Information
language English
publishDate 2015
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-4532447 2015-08-20 PLoS One Research Article Public Library of Science 2015-08-11 /pmc/articles/PMC4532447/ /pubmed/26262687 http://dx.doi.org/10.1371/journal.pone.0134254 Text en © 2015 McDonnell et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
title Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm
topic Research Article