
A Systematic Approach for Evaluating Artificial Intelligence Models in Industrial Settings

Artificial Intelligence (AI) is one of the hottest topics in our society, especially when it comes to solving data-analysis problems. Industry is conducting its digital shift, and AI is becoming a cornerstone technology for making decisions from the huge amount of (sensor-based) data available on the production floor. However, such technology may be disappointing when deployed in real conditions. Despite good theoretical performance and high accuracy when trained and tested in isolation, a Machine-Learning (M-L) model may deliver degraded performance in real conditions. One reason may be its fragility in properly handling unexpected or perturbed data. The objective of this paper is therefore to study the robustness of seven M-L and Deep-Learning (D-L) algorithms when classifying univariate time series under perturbations. A systematic approach is proposed for artificially injecting perturbations into the data and for evaluating the robustness of the models. This approach focuses on two perturbations that are likely to occur during data collection. Our experimental study, conducted on twenty sensor datasets from the public University of California Riverside (UCR) repository, shows a great disparity in the models' robustness under data-quality degradation. Those results are used to analyse whether such robustness degradation can be predicted, using decision trees, which would save us from testing all perturbation scenarios. Our study shows that building such a predictor is not straightforward and suggests that such a systematic approach needs to be used for evaluating AI models' robustness.
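A minimal sketch of the kind of perturbation-injection evaluation the abstract describes, assuming additive Gaussian noise and random sample dropout as the two data-collection perturbations (the abstract does not name them), a 1-nearest-neighbour classifier standing in for the seven M-L/D-L models, and synthetic series in place of the UCR datasets:

# Minimal sketch of perturbation-based robustness evaluation for a
# time-series classifier. The perturbation types (Gaussian noise,
# random sample dropout) and the synthetic data are illustrative
# assumptions, not the paper's exact protocol.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_series(n_per_class=100, length=128):
    """Generate two classes of univariate series (different frequencies)."""
    t = np.linspace(0, 1, length)
    X, y = [], []
    for label, freq in enumerate((3, 5)):
        for _ in range(n_per_class):
            X.append(np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(length))
            y.append(label)
    return np.array(X), np.array(y)

def add_noise(X, sigma):
    """Perturbation 1 (assumed): additive Gaussian sensor noise."""
    return X + sigma * rng.standard_normal(X.shape)

def drop_samples(X, rate):
    """Perturbation 2 (assumed): randomly lost samples, zero-filled."""
    mask = rng.random(X.shape) < rate
    Xp = X.copy()
    Xp[mask] = 0.0
    return Xp

# Train on clean data, then measure accuracy as perturbation severity grows.
X_train, y_train = make_series()
X_test, y_test = make_series()
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

for sigma in (0.0, 0.2, 0.5, 1.0):
    acc = accuracy_score(y_test, clf.predict(add_noise(X_test, sigma)))
    print(f"noise sigma={sigma:.1f}  accuracy={acc:.3f}")

for rate in (0.0, 0.1, 0.3, 0.5):
    acc = accuracy_score(y_test, clf.predict(drop_samples(X_test, rate)))
    print(f"dropout rate={rate:.1f}  accuracy={acc:.3f}")

In the paper's setting, the same loop would presumably be repeated per model and per UCR dataset to build the robustness profiles that are then analysed with decision trees.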


Bibliographic Details
Main authors: Benedick, Paul-Lou, Robert, Jérémy, Le Traon, Yves
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8469892/
https://www.ncbi.nlm.nih.gov/pubmed/34577398
http://dx.doi.org/10.3390/s21186195
author Benedick, Paul-Lou
Robert, Jérémy
Le Traon, Yves
collection PubMed
description Artificial Intelligence (AI) is one of the hottest topics in our society, especially when it comes to solving data-analysis problems. Industry is conducting its digital shift, and AI is becoming a cornerstone technology for making decisions from the huge amount of (sensor-based) data available on the production floor. However, such technology may be disappointing when deployed in real conditions. Despite good theoretical performance and high accuracy when trained and tested in isolation, a Machine-Learning (M-L) model may deliver degraded performance in real conditions. One reason may be its fragility in properly handling unexpected or perturbed data. The objective of this paper is therefore to study the robustness of seven M-L and Deep-Learning (D-L) algorithms when classifying univariate time series under perturbations. A systematic approach is proposed for artificially injecting perturbations into the data and for evaluating the robustness of the models. This approach focuses on two perturbations that are likely to occur during data collection. Our experimental study, conducted on twenty sensor datasets from the public University of California Riverside (UCR) repository, shows a great disparity in the models' robustness under data-quality degradation. Those results are used to analyse whether such robustness degradation can be predicted, using decision trees, which would save us from testing all perturbation scenarios. Our study shows that building such a predictor is not straightforward and suggests that such a systematic approach needs to be used for evaluating AI models' robustness.
format Online
Article
Text
id pubmed-8469892
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8469892 2021-09-27. Sensors (Basel), Article. MDPI 2021-09-15. /pmc/articles/PMC8469892/ /pubmed/34577398 http://dx.doi.org/10.3390/s21186195 Text en. © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title A Systematic Approach for Evaluating Artificial Intelligence Models in Industrial Settings
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8469892/
https://www.ncbi.nlm.nih.gov/pubmed/34577398
http://dx.doi.org/10.3390/s21186195