Comparison of different scoring methods based on latent variable models of the PHQ-9: an individual participant data meta-analysis
Main Authors: Fischer, Felix; Levis, Brooke; Falk, Carl; Sun, Ying; Ioannidis, John P. A.; Cuijpers, Pim; Shrier, Ian; Benedetti, Andrea; Thombs, Brett D.
Format: Online Article Text
Language: English
Published: Cambridge University Press, 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9393567/ https://www.ncbi.nlm.nih.gov/pubmed/33612144 http://dx.doi.org/10.1017/S0033291721000131
author | Fischer, Felix; Levis, Brooke; Falk, Carl; Sun, Ying; Ioannidis, John P. A.; Cuijpers, Pim; Shrier, Ian; Benedetti, Andrea; Thombs, Brett D. |
author_sort | Fischer, Felix |
collection | PubMed |
description | BACKGROUND: Previous research on the depression scale of the Patient Health Questionnaire (PHQ-9) has found that different latent factor models have maximized empirical measures of goodness-of-fit. The clinical relevance of these differences is unclear. We aimed to investigate whether depression screening accuracy may be improved by employing latent factor model-based scoring rather than sum scores. METHODS: We used an individual participant data meta-analysis (IPDMA) database compiled to assess the screening accuracy of the PHQ-9. We included studies that used the Structured Clinical Interview for DSM (SCID) as a reference standard and split those into calibration and validation datasets. In the calibration dataset, we estimated unidimensional, two-dimensional (separating cognitive/affective and somatic symptoms of depression), and bi-factor models, and the respective cut-offs to maximize combined sensitivity and specificity. In the validation dataset, we assessed the differences in (combined) sensitivity and specificity between the latent variable approaches and the optimal sum score (⩾10), using bootstrapping to estimate 95% confidence intervals for the differences. RESULTS: The calibration dataset included 24 studies (4378 participants, 652 major depression cases); the validation dataset included 17 studies (4252 participants, 568 cases). In the validation dataset, optimal cut-offs of the unidimensional, two-dimensional, and bi-factor models had higher sensitivity (by 0.036, 0.050, and 0.049 points, respectively) but lower specificity (by 0.017, 0.026, and 0.019 points, respectively) compared to the sum score cut-off of ⩾10. CONCLUSIONS: In a comprehensive dataset of diagnostic studies, scoring using complex latent variable models does not improve the screening accuracy of the PHQ-9 meaningfully as compared to the simple sum score approach. |
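The two statistical steps the abstract describes — picking the cut-off that maximizes combined sensitivity and specificity, then bootstrapping a 95% confidence interval for the difference in sensitivity between two cut-offs — can be sketched as follows. This is a hypothetical illustration on simulated data: the simulated scores, prevalence, and all function names are assumptions for demonstration, not the study's actual data or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for IPDMA data: PHQ-9 sum scores (0-27) and
# SCID diagnoses. Prevalence and score distributions are made up.
n = 500
depressed = rng.random(n) < 0.15
scores = np.where(depressed,
                  rng.binomial(27, 0.55, n),   # cases score higher on average
                  rng.binomial(27, 0.20, n))

def sens_spec(scores, truth, cutoff):
    """Sensitivity and specificity when score >= cutoff counts as positive."""
    pos = scores >= cutoff
    sens = np.mean(pos[truth])      # true positives among cases
    spec = np.mean(~pos[~truth])    # true negatives among non-cases
    return sens, spec

def optimal_cutoff(scores, truth):
    """Cut-off maximizing combined sensitivity + specificity (Youden's J + 1)."""
    return max(range(0, 28),
               key=lambda c: sum(sens_spec(scores, truth, c)))

cut = optimal_cutoff(scores, depressed)

def bootstrap_ci(scores, truth, cut_a, cut_b, n_boot=1000):
    """Percentile bootstrap 95% CI for the sensitivity difference between
    two cut-offs, e.g. a model-based cut-off vs. the standard >=10."""
    idx = np.arange(len(scores))
    diffs = []
    for _ in range(n_boot):
        b = rng.choice(idx, size=len(idx), replace=True)  # resample with replacement
        sa, _ = sens_spec(scores[b], truth[b], cut_a)
        sb, _ = sens_spec(scores[b], truth[b], cut_b)
        diffs.append(sa - sb)
    return np.percentile(diffs, [2.5, 97.5])

lo, hi = bootstrap_ci(scores, depressed, cut, 10)
```

The study applied this kind of comparison to latent-variable scores rather than a second sum-score cut-off, but the calibration/validation logic is the same: the cut-off is chosen on one dataset and the sensitivity/specificity differences are bootstrapped on the held-out one.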
format | Online Article Text |
id | pubmed-9393567 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Cambridge University Press |
record_format | MEDLINE/PubMed |
spelling | pubmed-9393567 (2022-08-22). Psychol Med, Original Article. Cambridge University Press; issue date 2022-11, published online 2021-02-22. /pmc/articles/PMC9393567/ /pubmed/33612144 http://dx.doi.org/10.1017/S0033291721000131 Text en © The Author(s) 2021. https://creativecommons.org/licenses/by-nc-sa/4.0/ This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (http://creativecommons.org/licenses/by-nc-sa/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is used to distribute the re-used or adapted article and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use. |
title | Comparison of different scoring methods based on latent variable models of the PHQ-9: an individual participant data meta-analysis |
topic | Original Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9393567/ https://www.ncbi.nlm.nih.gov/pubmed/33612144 http://dx.doi.org/10.1017/S0033291721000131 |