Explainability-Informed Feature Selection and Performance Prediction for Nonintrusive Load Monitoring †
Main authors: | Mollel, Rachel Stephen; Stankovic, Lina; Stankovic, Vladimir |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10221163/ https://www.ncbi.nlm.nih.gov/pubmed/37430758 http://dx.doi.org/10.3390/s23104845 |
_version_ | 1785049390755348480 |
---|---|
author | Mollel, Rachel Stephen; Stankovic, Lina; Stankovic, Vladimir |
author_facet | Mollel, Rachel Stephen; Stankovic, Lina; Stankovic, Vladimir |
author_sort | Mollel, Rachel Stephen |
collection | PubMed |
description | With the massive, worldwide smart-metering roll-out, both energy suppliers and users are starting to tap into the potential of higher-resolution energy readings for accurate billing, improved demand response, tariffs better tuned to users and the grid, and empowering end-users to know how much their individual appliances contribute to their electricity bills via nonintrusive load monitoring (NILM). A number of NILM approaches based on machine learning (ML) have been proposed over the years, focusing on improving NILM model performance. However, the trustworthiness of the NILM model itself has hardly been addressed. It is important to explain the underlying model and its reasoning to understand why the model underperforms, in order to satisfy user curiosity and to enable model improvement. This can be done by leveraging naturally interpretable or explainable models as well as explainability tools. This paper adopts a naturally interpretable decision tree (DT)-based approach for a NILM multiclass classifier. Furthermore, this paper leverages explainability tools to determine local and global feature importance, and designs a methodology that informs feature selection for each appliance class, which can determine how well a trained model will predict an appliance on any unseen test data, minimising testing time on target datasets. We explain how one or more appliances can negatively impact the classification of other appliances, and predict appliance and model performance of the REFIT-trained models on unseen data from the same house and on unseen houses from the UK-DALE dataset. Experimental results confirm that models trained with the explainability-informed local feature importance can improve toaster classification performance from 65% to 80%. Additionally, instead of a single five-classifier approach incorporating all five appliances, a three-classifier approach comprising the kettle, microwave, and dishwasher and a two-classifier approach comprising the toaster and washing machine improve classification performance for the dishwasher from 72% to 94% and for the washing machine from 56% to 80%. |
format | Online Article Text |
id | pubmed-10221163 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10221163 2023-05-28 Explainability-Informed Feature Selection and Performance Prediction for Nonintrusive Load Monitoring † Mollel, Rachel Stephen; Stankovic, Lina; Stankovic, Vladimir Sensors (Basel) Article MDPI 2023-05-17 /pmc/articles/PMC10221163/ /pubmed/37430758 http://dx.doi.org/10.3390/s23104845 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Mollel, Rachel Stephen; Stankovic, Lina; Stankovic, Vladimir Explainability-Informed Feature Selection and Performance Prediction for Nonintrusive Load Monitoring † |
title | Explainability-Informed Feature Selection and Performance Prediction for Nonintrusive Load Monitoring † |
title_full | Explainability-Informed Feature Selection and Performance Prediction for Nonintrusive Load Monitoring † |
title_fullStr | Explainability-Informed Feature Selection and Performance Prediction for Nonintrusive Load Monitoring † |
title_full_unstemmed | Explainability-Informed Feature Selection and Performance Prediction for Nonintrusive Load Monitoring † |
title_short | Explainability-Informed Feature Selection and Performance Prediction for Nonintrusive Load Monitoring † |
title_sort | explainability-informed feature selection and performance prediction for nonintrusive load monitoring † |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10221163/ https://www.ncbi.nlm.nih.gov/pubmed/37430758 http://dx.doi.org/10.3390/s23104845 |
work_keys_str_mv | AT mollelrachelstephen explainabilityinformedfeatureselectionandperformancepredictionfornonintrusiveloadmonitoring AT stankoviclina explainabilityinformedfeatureselectionandperformancepredictionfornonintrusiveloadmonitoring AT stankovicvladimir explainabilityinformedfeatureselectionandperformancepredictionfornonintrusiveloadmonitoring |
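
The abstract above outlines the paper's methodology at a high level: train a naturally interpretable decision-tree multiclass classifier over appliance classes, rank features by importance, and retrain on the importance-selected feature subset per appliance class. Below is a minimal sketch of that loop using scikit-learn. It is not the authors' implementation: the feature names, synthetic data, and importance threshold are assumptions, and the global impurity-based importances stand in for the explainability-tool-derived (e.g. SHAP-style local) importances the paper uses, which this record does not detail.

```python
# Sketch only: explainability-informed feature selection for a NILM
# multiclass decision tree. Feature names, data, and the 0.05 importance
# threshold are hypothetical placeholders, not the paper's values.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Hypothetical per-event features extracted from smart-meter readings.
feature_names = ["active_power_delta", "event_duration",
                 "peak_power", "time_of_day", "power_variance"]
X = rng.normal(size=(1000, len(feature_names)))
# Classes 0..4 standing in for kettle, microwave, dishwasher,
# toaster, and washing machine.
y = rng.integers(0, 5, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) Train an interpretable multiclass decision tree.
dt = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X_tr, y_tr)

# 2) Rank features by global impurity-based importance; a tool such as
#    SHAP or LIME could supply local, per-class importances instead.
importances = dict(zip(feature_names, dt.feature_importances_))
keep = [i for i, name in enumerate(feature_names)
        if importances[name] >= 0.05]  # assumed selection threshold

# 3) Retrain on the importance-selected subset and compare performance.
dt_sel = DecisionTreeClassifier(max_depth=8, random_state=0).fit(
    X_tr[:, keep], y_tr)
print("all features   F1:",
      f1_score(y_te, dt.predict(X_te), average="macro"))
print("selected subset F1:",
      f1_score(y_te, dt_sel.predict(X_te[:, keep]), average="macro"))
```

The same pattern extends to the abstract's classifier split: rather than one five-class model, train one tree on the {kettle, microwave, dishwasher} events and a second on the {toaster, washing machine} events, each with its own importance-selected feature subset.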