Fast and accurate interpretation of workload classification model
Main authors: | Shim, Sooyeon; Kim, Doyeon; Jang, Jun-Gi; Chae, Suhyun; Lee, Jeeyong; Kang, U. |
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2023 |
Subjects: | |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9987804/ https://www.ncbi.nlm.nih.gov/pubmed/36877703 http://dx.doi.org/10.1371/journal.pone.0282595 |
_version_ | 1784901454787510272 |
author | Shim, Sooyeon; Kim, Doyeon; Jang, Jun-Gi; Chae, Suhyun; Lee, Jeeyong; Kang, U. |
author_facet | Shim, Sooyeon; Kim, Doyeon; Jang, Jun-Gi; Chae, Suhyun; Lee, Jeeyong; Kang, U. |
author_sort | Shim, Sooyeon |
collection | PubMed |
description | How can we interpret predictions of a workload classification model? A workload is a sequence of operations executed in DRAM, where each operation contains a command and an address. Classifying a given sequence into a correct workload type is important for verifying the quality of DRAM. Although a previous model achieves a reasonable accuracy on workload classification, it is challenging to interpret the prediction results since it is a black box model. A promising direction is to exploit interpretation models which compute the amount of attribution each feature gives to the prediction. However, none of the existing interpretable models are tailored for workload classification. The main challenges to be addressed are to 1) provide interpretable features for further improving interpretability, 2) measure the similarity of features for constructing the interpretable super features, and 3) provide consistent interpretations over all instances. In this paper, we propose INFO (INterpretable model For wOrkload classification), a model-agnostic interpretable model which analyzes workload classification results. INFO provides interpretable results while producing accurate predictions. We design super features to enhance interpretability by hierarchically clustering original features used for the classifier. To generate the super features, we define and measure the interpretability-friendly similarity, a variant of Jaccard similarity between original features. Then, INFO globally explains the workload classification model by generalizing super features over all instances. Experiments show that INFO provides intuitive interpretations which are faithful to the original non-interpretable model. INFO also shows up to 2.0× faster running time than the competitor while having comparable accuracies for real-world workload datasets. |
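The abstract describes INFO's central mechanism: grouping the classifier's original features into "super features" by hierarchical clustering under an interpretability-friendly similarity, described as a variant of Jaccard similarity. The paper's exact similarity measure and clustering procedure are not reproduced in this record; the sketch below is a minimal, hypothetical Python illustration of the general idea, assuming binarized feature columns, plain Jaccard similarity as a stand-in for the paper's variant, and average-linkage clustering. Names such as `build_super_features` and `n_super` are illustrative, not from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def jaccard_similarity(a, b):
    """Jaccard similarity between two binary feature columns."""
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union if union > 0 else 0.0

def build_super_features(X_bin, n_super):
    """Group binary features into super features by average-linkage
    hierarchical clustering on (1 - Jaccard similarity) distances.

    X_bin   : (n_instances, n_features) binary matrix
    n_super : desired number of super features
    Returns an array assigning each original feature a super-feature id.
    """
    n_features = X_bin.shape[1]
    dist = np.zeros((n_features, n_features))
    for i in range(n_features):
        for j in range(i + 1, n_features):
            d = 1.0 - jaccard_similarity(X_bin[:, i], X_bin[:, j])
            dist[i, j] = dist[j, i] = d
    # Condense the symmetric distance matrix and cluster hierarchically.
    Z = linkage(squareform(dist), method="average")
    return fcluster(Z, t=n_super, criterion="maxclust")

# Toy usage: 100 workload instances with 6 binary features, grouped into 3 super features.
rng = np.random.default_rng(0)
X_bin = rng.integers(0, 2, size=(100, 6))
print(build_super_features(X_bin, n_super=3))
```

The resulting label array maps each original feature to a super feature; attributions computed by a model-agnostic explainer could then be aggregated per super feature, which is the kind of coarser, more interpretable unit the abstract refers to.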
format | Online Article Text |
id | pubmed-9987804 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-9987804 2023-03-07 Fast and accurate interpretation of workload classification model Shim, Sooyeon Kim, Doyeon Jang, Jun-Gi Chae, Suhyun Lee, Jeeyong Kang, U. PLoS One Research Article How can we interpret predictions of a workload classification model? A workload is a sequence of operations executed in DRAM, where each operation contains a command and an address. Classifying a given sequence into a correct workload type is important for verifying the quality of DRAM. Although a previous model achieves a reasonable accuracy on workload classification, it is challenging to interpret the prediction results since it is a black box model. A promising direction is to exploit interpretation models which compute the amount of attribution each feature gives to the prediction. However, none of the existing interpretable models are tailored for workload classification. The main challenges to be addressed are to 1) provide interpretable features for further improving interpretability, 2) measure the similarity of features for constructing the interpretable super features, and 3) provide consistent interpretations over all instances. In this paper, we propose INFO (INterpretable model For wOrkload classification), a model-agnostic interpretable model which analyzes workload classification results. INFO provides interpretable results while producing accurate predictions. We design super features to enhance interpretability by hierarchically clustering original features used for the classifier. To generate the super features, we define and measure the interpretability-friendly similarity, a variant of Jaccard similarity between original features. Then, INFO globally explains the workload classification model by generalizing super features over all instances. Experiments show that INFO provides intuitive interpretations which are faithful to the original non-interpretable model. INFO also shows up to 2.0× faster running time than the competitor while having comparable accuracies for real-world workload datasets. Public Library of Science 2023-03-06 /pmc/articles/PMC9987804/ /pubmed/36877703 http://dx.doi.org/10.1371/journal.pone.0282595 Text en © 2023 Shim et al https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article Shim, Sooyeon Kim, Doyeon Jang, Jun-Gi Chae, Suhyun Lee, Jeeyong Kang, U. Fast and accurate interpretation of workload classification model |
title | Fast and accurate interpretation of workload classification model |
title_full | Fast and accurate interpretation of workload classification model |
title_fullStr | Fast and accurate interpretation of workload classification model |
title_full_unstemmed | Fast and accurate interpretation of workload classification model |
title_short | Fast and accurate interpretation of workload classification model |
title_sort | fast and accurate interpretation of workload classification model |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9987804/ https://www.ncbi.nlm.nih.gov/pubmed/36877703 http://dx.doi.org/10.1371/journal.pone.0282595 |
work_keys_str_mv | AT shimsooyeon fastandaccurateinterpretationofworkloadclassificationmodel AT kimdoyeon fastandaccurateinterpretationofworkloadclassificationmodel AT jangjungi fastandaccurateinterpretationofworkloadclassificationmodel AT chaesuhyun fastandaccurateinterpretationofworkloadclassificationmodel AT leejeeyong fastandaccurateinterpretationofworkloadclassificationmodel AT kangu fastandaccurateinterpretationofworkloadclassificationmodel |