Opening the black box: interpretable machine learning for predictor finding of metabolic syndrome
OBJECTIVE: The internal workings of machine learning algorithms are complex, and these low-interpretability "black box" models are difficult for domain experts to understand and trust. The study uses metabolic syndrome (MetS) as the entry point to analyze an...
Main authors: | Zhang, Yan; Zhang, Xiaoxu; Razbek, Jaina; Li, Deyang; Xia, Wenjun; Bao, Liangliang; Mao, Hongkai; Daken, Mayisha; Cao, Mingqin |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central 2022 |
Subjects: | Research |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9419421/ https://www.ncbi.nlm.nih.gov/pubmed/36028865 http://dx.doi.org/10.1186/s12902-022-01121-4 |
_version_ | 1784777171357663232 |
---|---|
author | Zhang, Yan Zhang, Xiaoxu Razbek, Jaina Li, Deyang Xia, Wenjun Bao, Liangliang Mao, Hongkai Daken, Mayisha Cao, Mingqin |
author_facet | Zhang, Yan Zhang, Xiaoxu Razbek, Jaina Li, Deyang Xia, Wenjun Bao, Liangliang Mao, Hongkai Daken, Mayisha Cao, Mingqin |
author_sort | Zhang, Yan |
collection | PubMed |
description | OBJECTIVE: The internal workings of machine learning algorithms are complex; they are often treated as low-interpretability "black box" models, making it difficult for domain experts to understand and trust them. This study uses metabolic syndrome (MetS) as an entry point to analyze and evaluate the value of model interpretability methods in addressing the difficulty of interpreting predictive models. METHODS: The study collected data from a chain of health examination institutions in Urumqi from 2017 to 2019; 39,134 records remained after preprocessing steps such as deletion and imputation. Recursive feature elimination (RFE) was used for feature selection to reduce redundancy; MetS risk prediction models (logistic regression, random forest, XGBoost) were built on the selected feature subset, and accuracy, sensitivity, specificity, the Youden index, and AUROC were used to evaluate classification performance; post-hoc model-agnostic interpretation methods (variable importance, LIME) were used to interpret the predictive model's results. RESULTS: Eighteen physical examination indicators were selected by RFE, effectively reducing the redundancy of the physical examination data. The random forest and XGBoost models had higher accuracy, sensitivity, specificity, Youden index, and AUROC values than logistic regression, and XGBoost had higher sensitivity, Youden index, and AUROC values than random forest. The study used variable importance, LIME, and partial dependence plots (PDP) for global and local interpretation of the best-performing MetS risk prediction model (XGBoost). Different interpretation methods offer different insights into the model's results, allow more flexibility in model selection, and can visualize how and why the model makes its decisions.
The interpretable risk prediction model in this study can help identify risk factors associated with MetS. The results showed that, in addition to traditional risk factors such as overweight and obesity, hyperglycemia, hypertension, and dyslipidemia, MetS was also associated with other factors, including age, creatinine, uric acid, and alkaline phosphatase. CONCLUSION: Applying model interpretability methods to black box models not only preserves flexibility in model choice but also compensates for the models' lack of interpretability. Model interpretability methods can serve as a novel means of identifying variables that are likely to be good predictors. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12902-022-01121-4. |
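The feature-selection-then-predict pipeline the abstract describes can be sketched with scikit-learn. This is a minimal illustration on synthetic data: `make_classification` and logistic regression stand in for the paper's examination data and XGBoost model, and the feature counts are arbitrary, not the paper's eighteen indicators.

```python
# Sketch of RFE feature selection followed by a classifier evaluated by AUROC.
# Synthetic data; the estimator and counts are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=30,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Recursive feature elimination: repeatedly drop the weakest feature
# until only the requested number of predictors remains.
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=8)
rfe.fit(X_tr, y_tr)

# Refit on the reduced feature subset and score by AUROC.
clf = LogisticRegression(max_iter=1000).fit(rfe.transform(X_tr), y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(rfe.transform(X_te))[:, 1])
print(f"selected features: {int(rfe.support_.sum())}, AUROC: {auc:.3f}")
```

In the paper's setting the same pattern would be repeated for each of the three model families before comparing their metrics.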
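The evaluation metrics listed above all derive from a 2x2 confusion matrix; in particular, the Youden index is sensitivity + specificity - 1. A minimal sketch (the counts are made up for illustration, not taken from the study):

```python
# Accuracy, sensitivity, specificity, and Youden index from confusion-matrix
# counts. tp/fp/tn/fn values below are hypothetical, not the paper's data.
def classification_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    youden = sensitivity + specificity - 1  # Youden's J statistic
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "youden": youden}

m = classification_metrics(tp=80, fp=10, tn=90, fn=20)
print(m)
```

Unlike accuracy, the Youden index is insensitive to class imbalance, which is why it is reported alongside AUROC when comparing the three models.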
format | Online Article Text |
id | pubmed-9419421 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-9419421 2022-08-28 Opening the black box: interpretable machine learning for predictor finding of metabolic syndrome Zhang, Yan Zhang, Xiaoxu Razbek, Jaina Li, Deyang Xia, Wenjun Bao, Liangliang Mao, Hongkai Daken, Mayisha Cao, Mingqin BMC Endocr Disord Research BioMed Central 2022-08-26 /pmc/articles/PMC9419421/ /pubmed/36028865 http://dx.doi.org/10.1186/s12902-022-01121-4 Text en © The Author(s) 2022. This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Research Zhang, Yan Zhang, Xiaoxu Razbek, Jaina Li, Deyang Xia, Wenjun Bao, Liangliang Mao, Hongkai Daken, Mayisha Cao, Mingqin Opening the black box: interpretable machine learning for predictor finding of metabolic syndrome |
title | Opening the black box: interpretable machine learning for predictor finding of metabolic syndrome |
title_full | Opening the black box: interpretable machine learning for predictor finding of metabolic syndrome |
title_fullStr | Opening the black box: interpretable machine learning for predictor finding of metabolic syndrome |
title_full_unstemmed | Opening the black box: interpretable machine learning for predictor finding of metabolic syndrome |
title_short | Opening the black box: interpretable machine learning for predictor finding of metabolic syndrome |
title_sort | opening the black box: interpretable machine learning for predictor finding of metabolic syndrome |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9419421/ https://www.ncbi.nlm.nih.gov/pubmed/36028865 http://dx.doi.org/10.1186/s12902-022-01121-4 |
work_keys_str_mv | AT zhangyan openingtheblackboxinterpretablemachinelearningforpredictorfindingofmetabolicsyndrome AT zhangxiaoxu openingtheblackboxinterpretablemachinelearningforpredictorfindingofmetabolicsyndrome AT razbekjaina openingtheblackboxinterpretablemachinelearningforpredictorfindingofmetabolicsyndrome AT lideyang openingtheblackboxinterpretablemachinelearningforpredictorfindingofmetabolicsyndrome AT xiawenjun openingtheblackboxinterpretablemachinelearningforpredictorfindingofmetabolicsyndrome AT baoliangliang openingtheblackboxinterpretablemachinelearningforpredictorfindingofmetabolicsyndrome AT maohongkai openingtheblackboxinterpretablemachinelearningforpredictorfindingofmetabolicsyndrome AT dakenmayisha openingtheblackboxinterpretablemachinelearningforpredictorfindingofmetabolicsyndrome AT caomingqin openingtheblackboxinterpretablemachinelearningforpredictorfindingofmetabolicsyndrome |