Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction
Main Authors: | Pintelas, Emmanuel; Liaskos, Meletis; Livieris, Ioannis E.; Kotsiantis, Sotiris; Pintelas, Panagiotis |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2020 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8321040/ https://www.ncbi.nlm.nih.gov/pubmed/34460583 http://dx.doi.org/10.3390/jimaging6060037 |
_version_ | 1783730756896423936 |
---|---|
author | Pintelas, Emmanuel; Liaskos, Meletis; Livieris, Ioannis E.; Kotsiantis, Sotiris; Pintelas, Panagiotis |
author_facet | Pintelas, Emmanuel; Liaskos, Meletis; Livieris, Ioannis E.; Kotsiantis, Sotiris; Pintelas, Panagiotis |
author_sort | Pintelas, Emmanuel |
collection | PubMed |
description | Image classification is a very popular machine learning domain in which deep convolutional neural networks have come to dominate. These networks achieve remarkable performance in terms of prediction accuracy, but they are considered black box models since they lack the ability to interpret their inner working mechanism and explain the main reasoning behind their predictions. There is a variety of real-world tasks, such as medical applications, in which interpretability and explainability play a significant role. When black box models are used to make decisions on critical issues such as cancer prediction, high prediction accuracy without any sort of explanation for the prediction cannot be considered sufficient or ethically acceptable. Reasoning and explanation are essential in order to trust these models and support such critical predictions. Nevertheless, the definition and validation of the quality of a prediction model’s explanation can in general be considered extremely subjective and unclear. In this work, an accurate and interpretable machine learning framework for image classification problems is proposed, one able to produce high-quality explanations. For this task, a feature extraction and an explanation extraction framework are developed, and three basic general conditions are proposed which validate the quality of any model’s prediction explanation for any application domain. The feature extraction framework extracts and creates transparent and meaningful high-level features for images, while the explanation extraction framework is responsible for creating good explanations, relying on these extracted features and the prediction model’s inner function with respect to the proposed conditions. As a case study application, brain tumor magnetic resonance images were utilized for predicting glioma cancer. Our results demonstrate the efficiency of the proposed model, since it achieved sufficient prediction accuracy while also being interpretable and explainable in simple human terms. (An illustrative sketch of this feature-then-explain pattern follows the record below.) |
format | Online Article Text |
id | pubmed-8321040 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8321040 2021-08-26 Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction Pintelas, Emmanuel; Liaskos, Meletis; Livieris, Ioannis E.; Kotsiantis, Sotiris; Pintelas, Panagiotis J Imaging Article Image classification is a very popular machine learning domain in which deep convolutional neural networks have come to dominate. These networks achieve remarkable performance in terms of prediction accuracy, but they are considered black box models since they lack the ability to interpret their inner working mechanism and explain the main reasoning behind their predictions. There is a variety of real-world tasks, such as medical applications, in which interpretability and explainability play a significant role. When black box models are used to make decisions on critical issues such as cancer prediction, high prediction accuracy without any sort of explanation for the prediction cannot be considered sufficient or ethically acceptable. Reasoning and explanation are essential in order to trust these models and support such critical predictions. Nevertheless, the definition and validation of the quality of a prediction model’s explanation can in general be considered extremely subjective and unclear. In this work, an accurate and interpretable machine learning framework for image classification problems is proposed, one able to produce high-quality explanations. For this task, a feature extraction and an explanation extraction framework are developed, and three basic general conditions are proposed which validate the quality of any model’s prediction explanation for any application domain. The feature extraction framework extracts and creates transparent and meaningful high-level features for images, while the explanation extraction framework is responsible for creating good explanations, relying on these extracted features and the prediction model’s inner function with respect to the proposed conditions. As a case study application, brain tumor magnetic resonance images were utilized for predicting glioma cancer. Our results demonstrate the efficiency of the proposed model, since it achieved sufficient prediction accuracy while also being interpretable and explainable in simple human terms. MDPI 2020-05-28 /pmc/articles/PMC8321040/ /pubmed/34460583 http://dx.doi.org/10.3390/jimaging6060037 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article; Pintelas, Emmanuel; Liaskos, Meletis; Livieris, Ioannis E.; Kotsiantis, Sotiris; Pintelas, Panagiotis; Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction |
title | Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction |
title_full | Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction |
title_fullStr | Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction |
title_full_unstemmed | Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction |
title_short | Explainable Machine Learning Framework for Image Classification Problems: Case Study on Glioma Cancer Prediction |
title_sort | explainable machine learning framework for image classification problems: case study on glioma cancer prediction |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8321040/ https://www.ncbi.nlm.nih.gov/pubmed/34460583 http://dx.doi.org/10.3390/jimaging6060037 |
work_keys_str_mv | AT pintelasemmanuel explainablemachinelearningframeworkforimageclassificationproblemscasestudyongliomacancerprediction AT liaskosmeletis explainablemachinelearningframeworkforimageclassificationproblemscasestudyongliomacancerprediction AT livierisioannise explainablemachinelearningframeworkforimageclassificationproblemscasestudyongliomacancerprediction AT kotsiantissotiris explainablemachinelearningframeworkforimageclassificationproblemscasestudyongliomacancerprediction AT pintelaspanagiotis explainablemachinelearningframeworkforimageclassificationproblemscasestudyongliomacancerprediction |
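The description field above outlines a two-stage pattern: derive transparent, human-meaningful features from each image, then let a white-box model make the prediction so its reasoning can be read off directly. The Python sketch below illustrates that pattern only under stated assumptions: the four features (intensity, contrast, bright-region fraction, edge density), the choice of scikit-learn’s LogisticRegression, and the synthetic data are all hypothetical stand-ins, not the authors’ actual feature set, model, or explanation-validation conditions.

```python
# Minimal sketch of the "transparent features + white-box model" idea from the
# abstract. Feature choices and the classifier are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def transparent_features(img: np.ndarray) -> np.ndarray:
    """Map a 2-D grayscale image to a few human-interpretable features."""
    bright = img > img.mean() + img.std()      # crude bright-region mask
    gy, gx = np.gradient(img.astype(float))    # per-pixel intensity gradients
    return np.array([
        img.mean(),                # overall intensity
        img.std(),                 # contrast
        bright.mean(),             # fraction of image covered by bright region
        np.hypot(gx, gy).mean(),   # edge density, a rough texture proxy
    ])

# Synthetic stand-in data: 200 random 64x64 "slices" with dummy labels.
# Real use would load labeled MRI slices instead.
rng = np.random.default_rng(0)
X = np.stack([transparent_features(rng.random((64, 64))) for _ in range(200)])
y = rng.integers(0, 2, size=200)   # dummy labels: 1 = glioma, 0 = healthy

clf = LogisticRegression().fit(X, y)
for name, w in zip(["intensity", "contrast", "bright_area", "edge_density"],
                   clf.coef_[0]):
    print(f"{name}: weight {w:+.3f}")  # signed weights read as the explanation
```

Because the model is linear in named features, the explanation for any single image is simply the per-feature product of weight and feature value, which is the kind of transparency the abstract contrasts with pixel-level black box CNNs.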