An Explainable Artificial Intelligence Software Tool for Weight Management Experts (PRIMO): Mixed Methods Study
Main Authors: Fernandes, Glenn J; Choi, Arthur; Schauer, Jacob Michael; Pfammatter, Angela F; Spring, Bonnie J; Darwiche, Adnan; Alshurafa, Nabil I
Format: Online Article Text
Language: English
Published: JMIR Publications, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10512114/ https://www.ncbi.nlm.nih.gov/pubmed/37672333 http://dx.doi.org/10.2196/42047
author | Fernandes, Glenn J Choi, Arthur Schauer, Jacob Michael Pfammatter, Angela F Spring, Bonnie J Darwiche, Adnan Alshurafa, Nabil I |
collection | PubMed |
description | BACKGROUND: Predicting the likelihood of success of weight loss interventions using machine learning (ML) models may enhance intervention effectiveness by enabling timely and dynamic modification of intervention components for nonresponders to treatment. However, a lack of understanding and trust in these ML models impacts adoption among weight management experts. Recent advances in the field of explainable artificial intelligence enable the interpretation of ML models, yet it is unknown whether they enhance model understanding, trust, and adoption among weight management experts. OBJECTIVE: This study aimed to build and evaluate an ML model that can predict 6-month weight loss success (ie, ≥7% weight loss) from 5 engagement and diet-related features collected over the initial 2 weeks of an intervention, to assess whether providing ML-based explanations increases weight management experts’ agreement with ML model predictions, and to inform factors that influence the understanding and trust of ML models to advance explainability in early prediction of weight loss among weight management experts. METHODS: We trained an ML model using the random forest (RF) algorithm and data from a 6-month weight loss intervention (N=419). We leveraged findings from existing explainability metrics to develop Prime Implicant Maintenance of Outcome (PRIMO), an interactive tool to understand predictions made by the RF model. We asked 14 weight management experts to predict hypothetical participants’ weight loss success before and after using PRIMO. We compared PRIMO with 2 other explainability methods, one based on feature ranking and the other based on conditional probability. We used generalized linear mixed-effects models to evaluate participants’ agreement with ML predictions and conducted likelihood ratio tests to examine the relationship between explainability methods and outcomes for nested models. 
We conducted guided interviews and thematic analysis to study the impact of our tool on experts’ understanding and trust in the model. RESULTS: Our RF model had 81% accuracy in the early prediction of weight loss success. Weight management experts were significantly more likely to agree with the model when using PRIMO (χ²₂=7.9; P=.02) compared with the other 2 methods, with odds ratios of 2.52 (95% CI 0.91-7.69) and 3.95 (95% CI 1.50-11.76). From our study, we inferred that our software not only influenced experts’ understanding and trust but also impacted decision-making. Several themes were identified through interviews: preference for multiple explanation types, need to visualize uncertainty in explanations provided by PRIMO, and need for model performance metrics on similar participant test instances. CONCLUSIONS: Our results show the potential for weight management experts to agree with the ML-based early prediction of success in weight loss treatment programs, enabling timely and dynamic modification of intervention components to enhance intervention effectiveness. Our findings provide methods for advancing the understandability and trust of ML models among weight management experts. |
format | Online Article Text |
id | pubmed-10512114 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | JMIR Publications |
record_format | MEDLINE/PubMed |
spelling | pubmed-105121142023-09-22 An Explainable Artificial Intelligence Software Tool for Weight Management Experts (PRIMO): Mixed Methods Study Fernandes, Glenn J; Choi, Arthur; Schauer, Jacob Michael; Pfammatter, Angela F; Spring, Bonnie J; Darwiche, Adnan; Alshurafa, Nabil I. J Med Internet Res (Original Paper). JMIR Publications 2023-09-06 /pmc/articles/PMC10512114/ /pubmed/37672333 http://dx.doi.org/10.2196/42047 Text en ©Glenn J Fernandes, Arthur Choi, Jacob Michael Schauer, Angela F Pfammatter, Bonnie J Spring, Adnan Darwiche, Nabil I Alshurafa. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 06.09.2023. 
https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included. |
title | An Explainable Artificial Intelligence Software Tool for Weight Management Experts (PRIMO): Mixed Methods Study |
topic | Original Paper |
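The METHODS section of the record describes training a random forest classifier to predict a binary outcome (≥7% weight loss at 6 months) from 5 features collected in the first 2 weeks. A minimal sketch of that kind of model is below; it is illustrative only, not the authors' code — the data are synthetic, and the sample size (419) is the only detail taken from the study.

```python
# Illustrative sketch: a random forest predicting a binary "weight loss
# success" label from 5 early-intervention features. All data are simulated.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 419  # matches the study's sample size; the feature values are synthetic

# 5 hypothetical engagement/diet features from the first 2 weeks
X = rng.normal(size=(n, 5))
# Simulated outcome loosely tied to the features (not real study data)
logits = X @ np.array([0.8, -0.5, 0.6, 0.0, 0.3])
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

The study's reported 81% accuracy came from its real intervention data; a toy run like this only demonstrates the modeling pipeline, not the result.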