Modeling competence development in the presence of selection bias
A major challenge for representative longitudinal studies is panel attrition, because some respondents refuse to continue participating across all measurement waves. Depending on the nature of this selection process, statistical inferences based on the observed sample can be biased. Therefore, statistical analyses need to consider a missing-data mechanism. (An illustrative sketch follows this record summary.)
Main Authors: | Zinn, Sabine; Gnambs, Timo |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer US, 2018 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6267521/ https://www.ncbi.nlm.nih.gov/pubmed/29450705 http://dx.doi.org/10.3758/s13428-018-1021-z |
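The record's abstract contrasts several ways of handling panel attrition in growth curve modeling (listwise deletion, full-information maximum likelihood estimation, inverse probability weighting, multiple imputation, selection models, and pattern mixture models). As a minimal, hypothetical illustration of why that choice matters, the Python sketch below simulates a linear growth process with missing-at-random (MAR) dropout and compares the fixed-effect slope estimated after listwise deletion with the estimate from all available observations (likelihood estimation under MAR). It is not the authors' analysis: statsmodels' MixedLM, the simulated variables, and the dropout mechanism are all assumptions made for this example.

```python
# Hypothetical illustration on simulated data (not the study's sample): contrast
# listwise deletion with likelihood estimation on all available observations
# when fitting a linear growth curve model under missing-at-random dropout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2018)
n_students, n_waves = 1000, 3

# Student-specific intercepts and slopes; slopes correlate with intercepts,
# so selective dropout can distort the complete-case slope estimate.
intercepts = rng.normal(0.0, 1.0, n_students)
slopes = 0.5 + 0.4 * intercepts + rng.normal(0.0, 0.1, n_students)

df = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), n_waves),
    "wave": np.tile(np.arange(n_waves), n_students),
})
df["score"] = (intercepts[df["student"].to_numpy()]
               + slopes[df["student"].to_numpy()] * df["wave"]
               + rng.normal(0.0, 0.5, len(df)))

# MAR dropout: students with a low wave-0 score are more likely to skip later waves.
baseline = df.loc[df["wave"] == 0, "score"].to_numpy()   # one value per student
p_drop = 1.0 / (1.0 + np.exp(baseline))                  # decreases with the baseline score
dropped = rng.random(n_students) < p_drop
missing = dropped[df["student"].to_numpy()] & (df["wave"].to_numpy() > 0)
observed = df.loc[~missing]

# (a) Listwise deletion: keep only students observed at every wave.
n_obs = observed.groupby("student")["wave"].transform("count")
listwise = observed.loc[n_obs == n_waves]

def growth_slope(data):
    """Fixed-effect slope from a random-intercept, random-slope growth model."""
    model = smf.mixedlm("score ~ wave", data, groups=data["student"], re_formula="~wave")
    return model.fit().fe_params["wave"]

# (b) Maximum likelihood on all available observations (valid under MAR).
print("slope, listwise deletion :", round(growth_slope(listwise), 3))
print("slope, all available data:", round(growth_slope(observed), 3))
print("generating average slope :", 0.5)
```

Under these simulated conditions the complete-case slope tends to drift away from the generating value of 0.5, whereas the estimate based on all available rows should stay close to it; the article itself compares this kind of behavior across seven missing-data models on the sample of German students described above.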
_version_ | 1783376093980393472 |
---|---|
author | Zinn, Sabine; Gnambs, Timo
author_facet | Zinn, Sabine; Gnambs, Timo
author_sort | Zinn, Sabine |
collection | PubMed |
description | A major challenge for representative longitudinal studies is panel attrition, because some respondents refuse to continue participating across all measurement waves. Depending on the nature of this selection process, statistical inferences based on the observed sample can be biased. Therefore, statistical analyses need to consider a missing-data mechanism. Because each missing-data model hinges on frequently untestable assumptions, sensitivity analyses are indispensable to gauging the robustness of statistical inferences. This article highlights contemporary approaches for applied researchers to acknowledge missing data in longitudinal, multilevel modeling and shows how sensitivity analyses can guide their interpretation. Using a representative sample of N = 13,417 German students, the development of mathematical competence across three years was examined by contrasting seven missing-data models, including listwise deletion, full-information maximum likelihood estimation, inverse probability weighting, multiple imputation, selection models, and pattern mixture models. These analyses identified strong selection effects related to various individual and context factors. Comparative analyses revealed that inverse probability weighting performed rather poorly in growth curve modeling. Moreover, school-specific effects should be acknowledged in missing-data models for educational data. Finally, we demonstrated how sensitivity analyses can be used to gauge the robustness of the identified effects. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.3758/s13428-018-1021-z) contains supplementary material, which is available to authorized users. |
format | Online Article Text |
id | pubmed-6267521 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Springer US |
record_format | MEDLINE/PubMed |
spelling | pubmed-6267521 2018-12-11 Modeling competence development in the presence of selection bias Zinn, Sabine; Gnambs, Timo. Behav Res Methods, Article. Springer US 2018-02-15 2018 /pmc/articles/PMC6267521/ /pubmed/29450705 http://dx.doi.org/10.3758/s13428-018-1021-z Text en © The Author(s) 2018 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. |
spellingShingle | Article; Zinn, Sabine; Gnambs, Timo; Modeling competence development in the presence of selection bias
title | Modeling competence development in the presence of selection bias |
title_full | Modeling competence development in the presence of selection bias |
title_fullStr | Modeling competence development in the presence of selection bias |
title_full_unstemmed | Modeling competence development in the presence of selection bias |
title_short | Modeling competence development in the presence of selection bias |
title_sort | modeling competence development in the presence of selection bias |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6267521/ https://www.ncbi.nlm.nih.gov/pubmed/29450705 http://dx.doi.org/10.3758/s13428-018-1021-z |
work_keys_str_mv | AT zinnsabine modelingcompetencedevelopmentinthepresenceofselectionbias AT gnambstimo modelingcompetencedevelopmentinthepresenceofselectionbias |