
Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI)

In recent years, explainable artificial intelligence (XAI) techniques have been developed to improve the explainability, trust and transparency of machine learning models. This work presents a method that explains the outputs of an air-handling unit (AHU) faults classifier using a modified XAI technique, such that non-AI expert end-users who require justification for the diagnosis output can easily understand the reasoning behind the decision. The method operates as follows: First, an XGBoost algorithm is used to detect and classify potential faults in the heating and cooling coil valves, sensors, and the heat recovery of an air-handling unit. Second, an XAI-based SHAP technique is used to provide explanations, with a focus on the end-users, who are HVAC engineers. Then, relevant features are chosen based on user-selected feature sets and features with high attribution scores. Finally, a sliding window system is used to visualize the short history of these relevant features and provide explanations for the diagnosed faults in the observed time period. This study aimed to provide information not only about what occurs at the time of fault appearance, but also about how the fault occurred. Finally, the resulting explanations are evaluated by seven HVAC expert engineers. The proposed approach is validated using real data collected from a shopping mall.
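The workflow summarized in the abstract (an XGBoost fault classifier, SHAP attributions, selection of high-attribution features, and a sliding-window view of their recent history) can be sketched roughly as follows in Python. The feature names, fault labels, window length, and random data below are illustrative assumptions only; they are not the authors' dataset, model configuration, or code.

# Minimal sketch of an XGBoost + SHAP fault-explanation pipeline.
# All names and data below are illustrative assumptions, not the paper's.
import numpy as np
import pandas as pd
import xgboost as xgb
import shap

FEATURES = ["supply_air_temp", "return_air_temp", "heating_valve_pos",
            "cooling_valve_pos", "heat_recovery_speed"]            # assumed AHU signals
FAULTS = ["normal", "heating_valve_fault", "cooling_valve_fault", "sensor_fault"]  # assumed labels

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, len(FEATURES))), columns=FEATURES)
y = rng.integers(0, len(FAULTS), size=500)                         # placeholder labels

# Step 1: train a multi-class XGBoost fault classifier.
model = xgb.XGBClassifier(n_estimators=100, max_depth=4)
model.fit(X, y)

# Step 2: compute SHAP attributions for the tree model.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X)
if isinstance(sv, list):                   # older shap returns one array per class
    sv = np.stack(sv, axis=-1)             # -> (n_samples, n_features, n_classes)

# Step 3: keep the features with the highest mean |SHAP| for the predicted class.
pred_class = int(model.predict(X.tail(1))[0])
class_sv = sv[:, :, pred_class] if sv.ndim == 3 else sv
top_idx = np.abs(class_sv).mean(axis=0).argsort()[::-1][:3]
relevant = [FEATURES[i] for i in top_idx]

# Step 4: show a short sliding-window history of the relevant features.
WINDOW = 24                                # assumed window length
print("Diagnosed fault:", FAULTS[pred_class])
print(X[relevant].tail(WINDOW))

In the paper the relevant features are additionally constrained by user-selected feature sets and the explanations are rendered graphically for HVAC engineers; this sketch only reproduces the attribution-based selection and windowing steps.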


Bibliographic Details
Main Authors: Meas, Molika, Machlev, Ram, Kose, Ahmet, Tepljakov, Aleksei, Loo, Lauri, Levron, Yoash, Petlenkov, Eduard, Belikov, Juri
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9460735/
https://www.ncbi.nlm.nih.gov/pubmed/36080795
http://dx.doi.org/10.3390/s22176338
_version_ 1784786820237623296
author Meas, Molika
Machlev, Ram
Kose, Ahmet
Tepljakov, Aleksei
Loo, Lauri
Levron, Yoash
Petlenkov, Eduard
Belikov, Juri
author_facet Meas, Molika
Machlev, Ram
Kose, Ahmet
Tepljakov, Aleksei
Loo, Lauri
Levron, Yoash
Petlenkov, Eduard
Belikov, Juri
author_sort Meas, Molika
collection PubMed
description In recent years, explainable artificial intelligence (XAI) techniques have been developed to improve the explainability, trust and transparency of machine learning models. This work presents a method that explains the outputs of an air-handling unit (AHU) faults classifier using a modified XAI technique, such that non-AI expert end-users who require justification for the diagnosis output can easily understand the reasoning behind the decision. The method operates as follows: First, an XGBoost algorithm is used to detect and classify potential faults in the heating and cooling coil valves, sensors, and the heat recovery of an air-handling unit. Second, an XAI-based SHAP technique is used to provide explanations, with a focus on the end-users, who are HVAC engineers. Then, relevant features are chosen based on user-selected feature sets and features with high attribution scores. Finally, a sliding window system is used to visualize the short history of these relevant features and provide explanations for the diagnosed faults in the observed time period. This study aimed to provide information not only about what occurs at the time of fault appearance, but also about how the fault occurred. Finally, the resulting explanations are evaluated by seven HVAC expert engineers. The proposed approach is validated using real data collected from a shopping mall.
format Online
Article
Text
id pubmed-9460735
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9460735 2022-09-10 Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI) Meas, Molika Machlev, Ram Kose, Ahmet Tepljakov, Aleksei Loo, Lauri Levron, Yoash Petlenkov, Eduard Belikov, Juri Sensors (Basel) Article In recent years, explainable artificial intelligence (XAI) techniques have been developed to improve the explainability, trust and transparency of machine learning models. This work presents a method that explains the outputs of an air-handling unit (AHU) faults classifier using a modified XAI technique, such that non-AI expert end-users who require justification for the diagnosis output can easily understand the reasoning behind the decision. The method operates as follows: First, an XGBoost algorithm is used to detect and classify potential faults in the heating and cooling coil valves, sensors, and the heat recovery of an air-handling unit. Second, an XAI-based SHAP technique is used to provide explanations, with a focus on the end-users, who are HVAC engineers. Then, relevant features are chosen based on user-selected feature sets and features with high attribution scores. Finally, a sliding window system is used to visualize the short history of these relevant features and provide explanations for the diagnosed faults in the observed time period. This study aimed to provide information not only about what occurs at the time of fault appearance, but also about how the fault occurred. Finally, the resulting explanations are evaluated by seven HVAC expert engineers. The proposed approach is validated using real data collected from a shopping mall. MDPI 2022-08-23 /pmc/articles/PMC9460735/ /pubmed/36080795 http://dx.doi.org/10.3390/s22176338 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Meas, Molika
Machlev, Ram
Kose, Ahmet
Tepljakov, Aleksei
Loo, Lauri
Levron, Yoash
Petlenkov, Eduard
Belikov, Juri
Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI)
title Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI)
title_full Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI)
title_fullStr Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI)
title_full_unstemmed Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI)
title_short Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI)
title_sort explainability and transparency of classifiers for air-handling unit faults using explainable artificial intelligence (xai)
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9460735/
https://www.ncbi.nlm.nih.gov/pubmed/36080795
http://dx.doi.org/10.3390/s22176338
work_keys_str_mv AT measmolika explainabilityandtransparencyofclassifiersforairhandlingunitfaultsusingexplainableartificialintelligencexai
AT machlevram explainabilityandtransparencyofclassifiersforairhandlingunitfaultsusingexplainableartificialintelligencexai
AT koseahmet explainabilityandtransparencyofclassifiersforairhandlingunitfaultsusingexplainableartificialintelligencexai
AT tepljakovaleksei explainabilityandtransparencyofclassifiersforairhandlingunitfaultsusingexplainableartificialintelligencexai
AT loolauri explainabilityandtransparencyofclassifiersforairhandlingunitfaultsusingexplainableartificialintelligencexai
AT levronyoash explainabilityandtransparencyofclassifiersforairhandlingunitfaultsusingexplainableartificialintelligencexai
AT petlenkoveduard explainabilityandtransparencyofclassifiersforairhandlingunitfaultsusingexplainableartificialintelligencexai
AT belikovjuri explainabilityandtransparencyofclassifiersforairhandlingunitfaultsusingexplainableartificialintelligencexai