Graph-Powered Interpretable Machine Learning Models for Abnormality Detection in Ego-Things Network
In recent years, it has become essential to ensure that the outcomes of signal processing methods based on machine learning (ML) data-driven models can provide interpretable predictions. The interpretability of ML models can be defined as the capability to understand the reasons that contributed to...
Main Authors: | Thekke Kanapram, Divya; Marcenaro, Lucio; Martin Gomez, David; Regazzoni, Carlo |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8953755/ https://www.ncbi.nlm.nih.gov/pubmed/35336431 http://dx.doi.org/10.3390/s22062260 |
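The abstract above indicates that interpretability is obtained by matching graphs built from a semantic-level vocabulary learned from data, and that abnormalities are evaluated against the learned representation. The following is a minimal, self-contained sketch of that general idea, not the authors' implementation: the vocabulary symbols, the toy trajectories, and the scoring rule are illustrative assumptions.

```python
# Illustrative sketch only (assumed symbols and data, not the paper's method):
# a reference graph over learned semantic "vocabulary" states is built from
# normal experiences; a test sequence is scored by how well its transitions
# match the reference graph, and poorly matching transitions flag abnormality.
from collections import defaultdict


def build_reference_graph(symbol_sequences):
    """Estimate transition probabilities between semantic symbols
    observed in normal behaviour (the learned vocabulary graph)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in symbol_sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    graph = {}
    for a, nbrs in counts.items():
        total = sum(nbrs.values())
        graph[a] = {b: c / total for b, c in nbrs.items()}
    return graph


def abnormality_score(graph, seq):
    """Average mismatch of a test sequence against the reference graph:
    transitions that are rare or absent in the graph raise the score."""
    mismatch = 0.0
    for a, b in zip(seq, seq[1:]):
        p = graph.get(a, {}).get(b, 0.0)
        mismatch += 1.0 - p
    return mismatch / max(len(seq) - 1, 1)


if __name__ == "__main__":
    # Hypothetical semantic symbols for a vehicle's cooperative manoeuvre.
    normal_runs = [
        ["cruise", "approach", "brake", "stop"],
        ["cruise", "approach", "brake", "cruise"],
    ]
    reference = build_reference_graph(normal_runs)
    print(abnormality_score(reference, ["cruise", "approach", "brake", "stop"]))  # low score
    print(abnormality_score(reference, ["cruise", "stop", "cruise", "brake"]))    # higher score
```

Running the sketch prints a low score for the sequence consistent with the learned graph and a higher score for the one containing unseen transitions; in the multi-level setting described in the abstract, a comparison of this kind would be applied at each abstraction level of the learned models.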
_version_ | 1784675928345935872 |
---|---|
author | Thekke Kanapram, Divya; Marcenaro, Lucio; Martin Gomez, David; Regazzoni, Carlo
author_facet | Thekke Kanapram, Divya; Marcenaro, Lucio; Martin Gomez, David; Regazzoni, Carlo
author_sort | Thekke Kanapram, Divya |
collection | PubMed |
description | In recent years, it has become essential to ensure that the outcomes of signal processing methods based on machine learning (ML) data-driven models can provide interpretable predictions. The interpretability of ML models can be defined as the capability to understand the reasons that contributed to generating a given outcome in a complex autonomous or semi-autonomous system. The need for interpretability is often related to the evaluation of performance in complex systems and to the acceptance of agents’ automatization processes where critical, high-risk decisions have to be taken. This paper concentrates on one of the core functionalities of such systems, i.e., abnormality detection, and on choosing a model representation modality based on a data-driven machine learning (ML) technique such that the outcomes become interpretable. Interpretability in this work is achieved through graph matching of a semantic-level vocabulary generated from the data and its relationships. The proposed approach assumes that the chosen data-driven models should support emergent self-awareness (SA) of the agents at multiple abstraction levels. The capability of incrementally updating learned representation models based on the agent’s progressive experiences is shown to be strictly related to interpretability. As a case study, abnormality detection is analyzed as a primary feature of the collective awareness (CA) of a network of vehicles performing cooperative behaviors. Each vehicle is considered an example of an Internet of Things (IoT) node, therefore providing results that can be generalized to an IoT framework where agents have different sensors, actuators, and tasks to be accomplished. The capability of a model to allow the evaluation of abnormalities at different levels of abstraction in the learned models is addressed as a key aspect of interpretability. |
format | Online Article Text |
id | pubmed-8953755 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8953755 2022-03-26 Graph-Powered Interpretable Machine Learning Models for Abnormality Detection in Ego-Things Network Thekke Kanapram, Divya; Marcenaro, Lucio; Martin Gomez, David; Regazzoni, Carlo Sensors (Basel) Article In recent years, it has become essential to ensure that the outcomes of signal processing methods based on machine learning (ML) data-driven models can provide interpretable predictions. The interpretability of ML models can be defined as the capability to understand the reasons that contributed to generating a given outcome in a complex autonomous or semi-autonomous system. The need for interpretability is often related to the evaluation of performance in complex systems and to the acceptance of agents’ automatization processes where critical, high-risk decisions have to be taken. This paper concentrates on one of the core functionalities of such systems, i.e., abnormality detection, and on choosing a model representation modality based on a data-driven machine learning (ML) technique such that the outcomes become interpretable. Interpretability in this work is achieved through graph matching of a semantic-level vocabulary generated from the data and its relationships. The proposed approach assumes that the chosen data-driven models should support emergent self-awareness (SA) of the agents at multiple abstraction levels. The capability of incrementally updating learned representation models based on the agent’s progressive experiences is shown to be strictly related to interpretability. As a case study, abnormality detection is analyzed as a primary feature of the collective awareness (CA) of a network of vehicles performing cooperative behaviors. Each vehicle is considered an example of an Internet of Things (IoT) node, therefore providing results that can be generalized to an IoT framework where agents have different sensors, actuators, and tasks to be accomplished. The capability of a model to allow the evaluation of abnormalities at different levels of abstraction in the learned models is addressed as a key aspect of interpretability. MDPI 2022-03-15 /pmc/articles/PMC8953755/ /pubmed/35336431 http://dx.doi.org/10.3390/s22062260 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Thekke Kanapram, Divya Marcenaro, Lucio Martin Gomez, David Regazzoni, Carlo Graph-Powered Interpretable Machine Learning Models for Abnormality Detection in Ego-Things Network |
title | Graph-Powered Interpretable Machine Learning Models for Abnormality Detection in Ego-Things Network |
title_full | Graph-Powered Interpretable Machine Learning Models for Abnormality Detection in Ego-Things Network |
title_fullStr | Graph-Powered Interpretable Machine Learning Models for Abnormality Detection in Ego-Things Network |
title_full_unstemmed | Graph-Powered Interpretable Machine Learning Models for Abnormality Detection in Ego-Things Network |
title_short | Graph-Powered Interpretable Machine Learning Models for Abnormality Detection in Ego-Things Network |
title_sort | graph-powered interpretable machine learning models for abnormality detection in ego-things network |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8953755/ https://www.ncbi.nlm.nih.gov/pubmed/35336431 http://dx.doi.org/10.3390/s22062260 |
work_keys_str_mv | AT thekkekanapramdivya graphpoweredinterpretablemachinelearningmodelsforabnormalitydetectioninegothingsnetwork AT marcenarolucio graphpoweredinterpretablemachinelearningmodelsforabnormalitydetectioninegothingsnetwork AT martingomezdavid graphpoweredinterpretablemachinelearningmodelsforabnormalitydetectioninegothingsnetwork AT regazzonicarlo graphpoweredinterpretablemachinelearningmodelsforabnormalitydetectioninegothingsnetwork |