
Variable selection for inferential models with relatively high-dimensional data: Between method heterogeneity and covariate stability as adjuncts to robust selection

Variable selection in inferential modelling is problematic when the number of variables is large relative to the number of data points, especially when multicollinearity is present. A variety of techniques have been described to identify ‘important’ subsets of variables from within a large parameter...

Full description

Bibliographic Details
Main Authors: Lima, Eliana, Davies, Peers, Kaler, Jasmeet, Lovatt, Fiona, Green, Martin
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7224285/
https://www.ncbi.nlm.nih.gov/pubmed/32409668
http://dx.doi.org/10.1038/s41598-020-64829-0
_version_ 1783533874018516992
author Lima, Eliana
Davies, Peers
Kaler, Jasmeet
Lovatt, Fiona
Green, Martin
author_facet Lima, Eliana
Davies, Peers
Kaler, Jasmeet
Lovatt, Fiona
Green, Martin
author_sort Lima, Eliana
collection PubMed
description Variable selection in inferential modelling is problematic when the number of variables is large relative to the number of data points, especially when multicollinearity is present. A variety of techniques have been described to identify ‘important’ subsets of variables from within a large parameter space, but these may produce different results, which creates difficulties with inference and reproducibility. Our aim was to evaluate the extent to which variable selection would change depending on statistical approach, and whether triangulation across methods could enhance data interpretation. A real dataset containing 408 subjects, 337 explanatory variables and a normally distributed outcome was used. We show that with model hyperparameters optimised to minimise cross-validation error, ten methods of automated variable selection produced markedly different results; different variables were selected and model sparsity varied greatly. Comparison between multiple methods provided valuable additional insights. Two variables that were consistently selected and stable across all methods accounted for the majority of the explainable variability; these were the most plausible important candidate variables. Further variables of importance were identified from evaluating selection stability across all methods. In conclusion, triangulation of results across methods, including use of covariate stability, can greatly enhance data interpretation and confidence in variable selection.
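The triangulation workflow the abstract describes — tune each selection method's hyperparameters by cross-validation, compare which covariates the methods agree on, and score each covariate's selection stability under resampling — can be sketched as follows. This is not the paper's code: the data are simulated, only two of the ten methods are stood in for (scikit-learn's LassoCV and ElasticNetCV), and the stability score is a simple bootstrap selection frequency.

```python
# Minimal sketch of multi-method variable selection with covariate
# stability, assuming simulated data (NOT the paper's 408-subject dataset).
import numpy as np
from sklearn.linear_model import ElasticNetCV, LassoCV

rng = np.random.default_rng(0)
n, p = 120, 40                                  # subjects, candidate covariates
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                     # three truly informative covariates
y = X @ beta + rng.normal(scale=0.5, size=n)

def selected(model, X, y):
    """Indices of covariates given non-zero coefficients after CV-tuned fitting."""
    return set(np.flatnonzero(model.fit(X, y).coef_))

# Hyperparameters optimised to minimise cross-validation error, per the abstract.
sel_lasso = selected(LassoCV(cv=5), X, y)
sel_enet = selected(ElasticNetCV(cv=5, l1_ratio=0.5), X, y)

# Triangulation: covariates chosen by every method are the strongest candidates.
consensus = sel_lasso & sel_enet

# Covariate stability: fraction of bootstrap resamples in which each
# covariate is re-selected (here under the lasso only, for brevity).
B = 20
counts = np.zeros(p)
for _ in range(B):
    idx = rng.integers(0, n, size=n)            # bootstrap resample of rows
    counts[list(selected(LassoCV(cv=5), X[idx], y[idx]))] += 1
stability = counts / B

print("consensus:", sorted(consensus))
print("stability of true covariates:", np.round(stability[:3], 2))
```

With a clear signal, the truly informative covariates land in the consensus set and show stability near 1.0, while noise covariates that any single method picks up tend to have low stability — the behaviour the abstract leans on to separate plausible candidates from method-specific artefacts.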
format Online
Article
Text
id pubmed-7224285
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-72242852020-05-20 Variable selection for inferential models with relatively high-dimensional data: Between method heterogeneity and covariate stability as adjuncts to robust selection Lima, Eliana Davies, Peers Kaler, Jasmeet Lovatt, Fiona Green, Martin Sci Rep Article Variable selection in inferential modelling is problematic when the number of variables is large relative to the number of data points, especially when multicollinearity is present. A variety of techniques have been described to identify ‘important’ subsets of variables from within a large parameter space, but these may produce different results, which creates difficulties with inference and reproducibility. Our aim was to evaluate the extent to which variable selection would change depending on statistical approach, and whether triangulation across methods could enhance data interpretation. A real dataset containing 408 subjects, 337 explanatory variables and a normally distributed outcome was used. We show that with model hyperparameters optimised to minimise cross-validation error, ten methods of automated variable selection produced markedly different results; different variables were selected and model sparsity varied greatly. Comparison between multiple methods provided valuable additional insights. Two variables that were consistently selected and stable across all methods accounted for the majority of the explainable variability; these were the most plausible important candidate variables. Further variables of importance were identified from evaluating selection stability across all methods. In conclusion, triangulation of results across methods, including use of covariate stability, can greatly enhance data interpretation and confidence in variable selection.
Nature Publishing Group UK 2020-05-14 /pmc/articles/PMC7224285/ /pubmed/32409668 http://dx.doi.org/10.1038/s41598-020-64829-0 Text en © The Author(s) 2020 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Article
Lima, Eliana
Davies, Peers
Kaler, Jasmeet
Lovatt, Fiona
Green, Martin
Variable selection for inferential models with relatively high-dimensional data: Between method heterogeneity and covariate stability as adjuncts to robust selection
title Variable selection for inferential models with relatively high-dimensional data: Between method heterogeneity and covariate stability as adjuncts to robust selection
title_full Variable selection for inferential models with relatively high-dimensional data: Between method heterogeneity and covariate stability as adjuncts to robust selection
title_fullStr Variable selection for inferential models with relatively high-dimensional data: Between method heterogeneity and covariate stability as adjuncts to robust selection
title_full_unstemmed Variable selection for inferential models with relatively high-dimensional data: Between method heterogeneity and covariate stability as adjuncts to robust selection
title_short Variable selection for inferential models with relatively high-dimensional data: Between method heterogeneity and covariate stability as adjuncts to robust selection
title_sort variable selection for inferential models with relatively high-dimensional data: between method heterogeneity and covariate stability as adjuncts to robust selection
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7224285/
https://www.ncbi.nlm.nih.gov/pubmed/32409668
http://dx.doi.org/10.1038/s41598-020-64829-0
work_keys_str_mv AT limaeliana variableselectionforinferentialmodelswithrelativelyhighdimensionaldatabetweenmethodheterogeneityandcovariatestabilityasadjunctstorobustselection
AT daviespeers variableselectionforinferentialmodelswithrelativelyhighdimensionaldatabetweenmethodheterogeneityandcovariatestabilityasadjunctstorobustselection
AT kalerjasmeet variableselectionforinferentialmodelswithrelativelyhighdimensionaldatabetweenmethodheterogeneityandcovariatestabilityasadjunctstorobustselection
AT lovattfiona variableselectionforinferentialmodelswithrelativelyhighdimensionaldatabetweenmethodheterogeneityandcovariatestabilityasadjunctstorobustselection
AT greenmartin variableselectionforinferentialmodelswithrelativelyhighdimensionaldatabetweenmethodheterogeneityandcovariatestabilityasadjunctstorobustselection