
Extensions of the External Validation for Checking Learned Model Interpretability and Generalizability

We discuss the validation of machine learning models, which is standard practice in determining model efficacy and generalizability. We argue that internal validation approaches, such as cross-validation and the bootstrap, cannot guarantee the quality of a machine learning model, because the training data may be biased and the validation procedure itself is complex. To better evaluate the generalization ability of a learned model, we suggest leveraging external data sources as validation datasets, i.e., external validation. Given the lack of research attention on external validation, and in particular the absence of a well-structured and comprehensive study, we discuss the necessity of external validation and propose two extensions of the external validation approach that may help reveal the true domain-relevant model from a candidate set. We also suggest a procedure to check whether a set of validation datasets is itself valid and introduce statistical reference points for detecting problems in external data.
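The contrast the abstract draws between internal and external validation can be illustrated with a small sketch. The example below is not from the paper; it is a minimal illustration assuming scikit-learn, using a synthetic "internal" cohort for 5-fold cross-validation and a perturbed hold-out as a stand-in for an "external" cohort collected elsewhere.

# Minimal sketch (not from the paper): internal validation via cross-validation
# on the training cohort vs. external validation on an independently shifted cohort.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)

# One synthetic dataset, split into an "internal" cohort and an "external" cohort.
X, y = make_classification(n_samples=800, n_features=20,
                           n_informative=5, random_state=0)
X_int, y_int = X[:500], y[:500]
X_ext, y_ext = X[500:], y[500:]

# Simulate a site/batch difference in the external cohort by perturbing its features.
X_ext = X_ext + rng.normal(scale=0.75, size=X_ext.shape)

model = LogisticRegression(max_iter=1000)

# Internal validation: 5-fold cross-validation on the internal cohort only.
cv_acc = cross_val_score(model, X_int, y_int, cv=5).mean()

# External validation: fit on the full internal cohort, score on the external cohort.
model.fit(X_int, y_int)
ext_acc = model.score(X_ext, y_ext)

print(f"internal CV accuracy: {cv_acc:.3f}")
print(f"external accuracy:    {ext_acc:.3f}")

In a typical run the cross-validation estimate is higher than the score on the shifted external cohort, illustrating the kind of optimism gap that external validation is meant to expose.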


Bibliographic Details
Main Authors: Ho, Sung Yang, Phua, Kimberly, Wong, Limsoon, Bin Goh, Wilson Wen
Format: Online Article Text
Language: English
Published: Elsevier 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7691387/
https://www.ncbi.nlm.nih.gov/pubmed/33294870
http://dx.doi.org/10.1016/j.patter.2020.100129
_version_ 1783614279363067904
author Ho, Sung Yang
Phua, Kimberly
Wong, Limsoon
Bin Goh, Wilson Wen
author_facet Ho, Sung Yang
Phua, Kimberly
Wong, Limsoon
Bin Goh, Wilson Wen
author_sort Ho, Sung Yang
collection PubMed
description We discuss the validation of machine learning models, which is standard practice in determining model efficacy and generalizability. We argue that internal validation approaches, such as cross-validation and the bootstrap, cannot guarantee the quality of a machine learning model, because the training data may be biased and the validation procedure itself is complex. To better evaluate the generalization ability of a learned model, we suggest leveraging external data sources as validation datasets, i.e., external validation. Given the lack of research attention on external validation, and in particular the absence of a well-structured and comprehensive study, we discuss the necessity of external validation and propose two extensions of the external validation approach that may help reveal the true domain-relevant model from a candidate set. We also suggest a procedure to check whether a set of validation datasets is itself valid and introduce statistical reference points for detecting problems in external data.
format Online
Article
Text
id pubmed-7691387
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-76913872020-12-07 Extensions of the External Validation for Checking Learned Model Interpretability and Generalizability Ho, Sung Yang Phua, Kimberly Wong, Limsoon Bin Goh, Wilson Wen Patterns (N Y) Perspective We discuss the validation of machine learning models, which is standard practice in determining model efficacy and generalizability. We argue that internal validation approaches, such as cross-validation and the bootstrap, cannot guarantee the quality of a machine learning model, because the training data may be biased and the validation procedure itself is complex. To better evaluate the generalization ability of a learned model, we suggest leveraging external data sources as validation datasets, i.e., external validation. Given the lack of research attention on external validation, and in particular the absence of a well-structured and comprehensive study, we discuss the necessity of external validation and propose two extensions of the external validation approach that may help reveal the true domain-relevant model from a candidate set. We also suggest a procedure to check whether a set of validation datasets is itself valid and introduce statistical reference points for detecting problems in external data. Elsevier 2020-11-13 /pmc/articles/PMC7691387/ /pubmed/33294870 http://dx.doi.org/10.1016/j.patter.2020.100129 Text en © 2020 The Authors http://creativecommons.org/licenses/by-nc-nd/4.0/ This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
spellingShingle Perspective
Ho, Sung Yang
Phua, Kimberly
Wong, Limsoon
Bin Goh, Wilson Wen
Extensions of the External Validation for Checking Learned Model Interpretability and Generalizability
title Extensions of the External Validation for Checking Learned Model Interpretability and Generalizability
title_full Extensions of the External Validation for Checking Learned Model Interpretability and Generalizability
title_fullStr Extensions of the External Validation for Checking Learned Model Interpretability and Generalizability
title_full_unstemmed Extensions of the External Validation for Checking Learned Model Interpretability and Generalizability
title_short Extensions of the External Validation for Checking Learned Model Interpretability and Generalizability
title_sort extensions of the external validation for checking learned model interpretability and generalizability
topic Perspective
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7691387/
https://www.ncbi.nlm.nih.gov/pubmed/33294870
http://dx.doi.org/10.1016/j.patter.2020.100129
work_keys_str_mv AT hosungyang extensionsoftheexternalvalidationforcheckinglearnedmodelinterpretabilityandgeneralizability
AT phuakimberly extensionsoftheexternalvalidationforcheckinglearnedmodelinterpretabilityandgeneralizability
AT wonglimsoon extensionsoftheexternalvalidationforcheckinglearnedmodelinterpretabilityandgeneralizability
AT bingohwilsonwen extensionsoftheexternalvalidationforcheckinglearnedmodelinterpretabilityandgeneralizability