
Machine Learning on Mainstream Microcontrollers †

This paper presents the Edge Learning Machine (ELM), a machine learning framework for edge devices, which manages the training phase on a desktop computer and performs inference on microcontrollers. The framework implements, in platform-independent C, three supervised machine learning algorithms (Support Vector Machine (SVM) with a linear kernel, k-Nearest Neighbors (k-NN), and Decision Tree (DT)), and exploits STM X-Cube-AI to implement Artificial Neural Networks (ANNs) on STM32 Nucleo boards. We investigated the performance of these algorithms on six embedded boards and six datasets (four classification and two regression). Our analysis, which aims to plug a gap in the literature, shows that the target platforms achieve the same performance score as a desktop machine, with similar time latency. ANN performs better than the other algorithms in most cases, with no difference among the target devices. We observed that increasing the depth of an ANN improves performance, up to a saturation level. k-NN performs similarly to ANN and, in one case, even better, but requires the whole training set to be kept during the inference phase, posing a significant memory demand that only high-end edge devices can afford. DT performance has a larger variance across datasets. In general, several factors impact performance in different ways across datasets. This highlights the importance of a framework like ELM, which is able to train and compare different algorithms. To support the developer community, ELM is released on an open-source basis.


Bibliographic Details
Main Authors: Sakr, Fouad, Bellotti, Francesco, Berta, Riccardo, De Gloria, Alessandro
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7249132/
https://www.ncbi.nlm.nih.gov/pubmed/32380766
http://dx.doi.org/10.3390/s20092638
_version_ 1783538533295718400
author Sakr, Fouad
Bellotti, Francesco
Berta, Riccardo
De Gloria, Alessandro
author_facet Sakr, Fouad
Bellotti, Francesco
Berta, Riccardo
De Gloria, Alessandro
author_sort Sakr, Fouad
collection PubMed
description This paper presents the Edge Learning Machine (ELM), a machine learning framework for edge devices, which manages the training phase on a desktop computer and performs inference on microcontrollers. The framework implements, in platform-independent C, three supervised machine learning algorithms (Support Vector Machine (SVM) with a linear kernel, k-Nearest Neighbors (k-NN), and Decision Tree (DT)), and exploits STM X-Cube-AI to implement Artificial Neural Networks (ANNs) on STM32 Nucleo boards. We investigated the performance of these algorithms on six embedded boards and six datasets (four classification and two regression). Our analysis, which aims to plug a gap in the literature, shows that the target platforms achieve the same performance score as a desktop machine, with similar time latency. ANN performs better than the other algorithms in most cases, with no difference among the target devices. We observed that increasing the depth of an ANN improves performance, up to a saturation level. k-NN performs similarly to ANN and, in one case, even better, but requires the whole training set to be kept during the inference phase, posing a significant memory demand that only high-end edge devices can afford. DT performance has a larger variance across datasets. In general, several factors impact performance in different ways across datasets. This highlights the importance of a framework like ELM, which is able to train and compare different algorithms. To support the developer community, ELM is released on an open-source basis.
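The description notes that linear-kernel SVM inference runs in platform-independent C on the target microcontroller. As a rough sketch only, not taken from the ELM sources (function and variable names are hypothetical), a linear decision function reduces to a dot product plus a bias:

#include <stdio.h>
#include <stddef.h>

/* Illustrative linear-kernel SVM decision function for on-device inference.
 * The weight vector and bias would be exported from the desktop training
 * phase and stored as constants in firmware (all names here are hypothetical). */
static float svm_decision(const float *x, const float *w, float b, size_t n)
{
    float score = b;
    for (size_t i = 0; i < n; i++)
        score += w[i] * x[i];          /* dot product w.x plus bias */
    return score;                      /* binary class: score >= 0 ? 1 : 0 */
}

int main(void)
{
    const float w[3] = { 0.8f, -1.2f, 0.5f };   /* hypothetical trained weights */
    const float x[3] = { 1.0f,  0.3f, 2.0f };   /* one input sample */
    float s = svm_decision(x, w, -0.1f, 3);
    printf("score = %f, class = %d\n", s, s >= 0.0f ? 1 : 0);
    return 0;
}

Since only the weight vector and bias need to live on the device, the per-model memory cost is a few bytes per feature, one reason linear models fit comfortably even on constrained boards.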
format Online
Article
Text
id pubmed-7249132
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-72491322020-06-10 Machine Learning on Mainstream Microcontrollers † Sakr, Fouad Bellotti, Francesco Berta, Riccardo De Gloria, Alessandro Sensors (Basel) Article This paper presents the Edge Learning Machine (ELM), a machine learning framework for edge devices, which manages the training phase on a desktop computer and performs inferences on microcontrollers. The framework implements, in a platform-independent C language, three supervised machine learning algorithms (Support Vector Machine (SVM) with a linear kernel, k-Nearest Neighbors (K-NN), and Decision Tree (DT)), and exploits STM X-Cube-AI to implement Artificial Neural Networks (ANNs) on STM32 Nucleo boards. We investigated the performance of these algorithms on six embedded boards and six datasets (four classifications and two regression). Our analysis—which aims to plug a gap in the literature—shows that the target platforms allow us to achieve the same performance score as a desktop machine, with a similar time latency. ANN performs better than the other algorithms in most cases, with no difference among the target devices. We observed that increasing the depth of an NN improves performance, up to a saturation level. k-NN performs similarly to ANN and, in one case, even better, but requires all the training sets to be kept in the inference phase, posing a significant memory demand, which can be afforded only by high-end edge devices. DT performance has a larger variance across datasets. In general, several factors impact performance in different ways across datasets. This highlights the importance of a framework like ELM, which is able to train and compare different algorithms. To support the developer community, ELM is released on an open-source basis. MDPI 2020-05-05 /pmc/articles/PMC7249132/ /pubmed/32380766 http://dx.doi.org/10.3390/s20092638 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
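The record also repeats the observation that k-NN must keep the whole training set available during inference. A minimal 1-nearest-neighbor sketch (illustrative only, not the ELM implementation; k is fixed to 1 and all names are hypothetical) makes the memory implication concrete: every stored training sample is scanned for each prediction.

#include <stdio.h>
#include <stddef.h>
#include <float.h>

/* Illustrative 1-nearest-neighbor classifier. The entire training set
 * (train_x, train_y) must stay in device memory and is scanned on every
 * prediction, which is the memory cost the abstract refers to. */
static int knn_predict_1nn(const float *x, const float *train_x, const int *train_y,
                           size_t n_samples, size_t n_features)
{
    float best_dist = FLT_MAX;
    int best_label = -1;
    for (size_t s = 0; s < n_samples; s++) {
        float dist = 0.0f;
        for (size_t f = 0; f < n_features; f++) {
            float d = train_x[s * n_features + f] - x[f];
            dist += d * d;                     /* squared Euclidean distance */
        }
        if (dist < best_dist) {
            best_dist = dist;
            best_label = train_y[s];
        }
    }
    return best_label;
}

int main(void)
{
    const float train_x[4] = { 0.0f, 0.0f, 1.0f, 1.0f };   /* 2 samples x 2 features */
    const int   train_y[2] = { 0, 1 };
    const float x[2]       = { 0.9f, 0.8f };
    printf("predicted class = %d\n", knn_predict_1nn(x, train_x, train_y, 2, 2));
    return 0;
}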
spellingShingle Article
Sakr, Fouad
Bellotti, Francesco
Berta, Riccardo
De Gloria, Alessandro
Machine Learning on Mainstream Microcontrollers †
title Machine Learning on Mainstream Microcontrollers †
title_full Machine Learning on Mainstream Microcontrollers †
title_fullStr Machine Learning on Mainstream Microcontrollers †
title_full_unstemmed Machine Learning on Mainstream Microcontrollers †
title_short Machine Learning on Mainstream Microcontrollers †
title_sort machine learning on mainstream microcontrollers †
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7249132/
https://www.ncbi.nlm.nih.gov/pubmed/32380766
http://dx.doi.org/10.3390/s20092638
work_keys_str_mv AT sakrfouad machinelearningonmainstreammicrocontrollers
AT bellottifrancesco machinelearningonmainstreammicrocontrollers
AT bertariccardo machinelearningonmainstreammicrocontrollers
AT degloriaalessandro machinelearningonmainstreammicrocontrollers