
Interpretability of Input Representations for Gait Classification in Patients after Total Hip Arthroplasty

Many machine learning models show black-box characteristics and therefore lack transparency, interpretability, and trustworthiness, which strongly limits their practical application in clinical contexts. To overcome these limitations, Explainable Artificial Intelligence (XAI) has shown promising results. The current study examined the influence of different input representations on a trained model's accuracy, interpretability, and clinical relevance using XAI methods. The gait of 27 healthy subjects and 20 subjects after total hip arthroplasty (THA) was recorded with an inertial measurement unit (IMU)-based system. Three different input representations were used for classification, and Local Interpretable Model-Agnostic Explanations (LIME) was used for model interpretation. The best accuracy was achieved with automatically extracted features (mean accuracy M(acc) = 100%), followed by features based on simple descriptive statistics (M(acc) = 97.38%) and waveform data (M(acc) = 95.88%). Globally, sagittal movement of the hip, knee, and pelvis as well as transversal movement of the ankle were especially important for this classification task. The current work shows that the type of input representation crucially determines interpretability as well as clinical relevance. A combined approach using different forms of representation seems advantageous. The results might assist physicians and therapists in identifying and addressing individual pathological gait patterns.

Bibliographic Details
Main Authors: Dindorf, Carlo, Teufl, Wolfgang, Taetz, Bertram, Bleser, Gabriele, Fröhlich, Michael
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7471970/
https://www.ncbi.nlm.nih.gov/pubmed/32781583
http://dx.doi.org/10.3390/s20164385
author Dindorf, Carlo
Teufl, Wolfgang
Taetz, Bertram
Bleser, Gabriele
Fröhlich, Michael
author_sort Dindorf, Carlo
collection PubMed
description Many machine learning models show black-box characteristics and therefore lack transparency, interpretability, and trustworthiness, which strongly limits their practical application in clinical contexts. To overcome these limitations, Explainable Artificial Intelligence (XAI) has shown promising results. The current study examined the influence of different input representations on a trained model's accuracy, interpretability, and clinical relevance using XAI methods. The gait of 27 healthy subjects and 20 subjects after total hip arthroplasty (THA) was recorded with an inertial measurement unit (IMU)-based system. Three different input representations were used for classification, and Local Interpretable Model-Agnostic Explanations (LIME) was used for model interpretation. The best accuracy was achieved with automatically extracted features (mean accuracy M(acc) = 100%), followed by features based on simple descriptive statistics (M(acc) = 97.38%) and waveform data (M(acc) = 95.88%). Globally, sagittal movement of the hip, knee, and pelvis as well as transversal movement of the ankle were especially important for this classification task. The current work shows that the type of input representation crucially determines interpretability as well as clinical relevance. A combined approach using different forms of representation seems advantageous. The results might assist physicians and therapists in identifying and addressing individual pathological gait patterns.
format Online
Article
Text
id pubmed-7471970
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7471970 2020-09-17 Sensors (Basel) Article MDPI 2020-08-06 /pmc/articles/PMC7471970/ /pubmed/32781583 http://dx.doi.org/10.3390/s20164385 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
title Interpretability of Input Representations for Gait Classification in Patients after Total Hip Arthroplasty
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7471970/
https://www.ncbi.nlm.nih.gov/pubmed/32781583
http://dx.doi.org/10.3390/s20164385