The reproducibility of psychiatric evaluations of work disability: two reliability and agreement studies

Bibliographic Details
Main Authors: Kunz, Regina, von Allmen, David Y., Marelli, Renato, Hoffmann-Richter, Ulrike, Jeger, Joerg, Mager, Ralph, Colomb, Etienne, Schaad, Heinz J., Bachmann, Monica, Vogel, Nicole, Busse, Jason W., Eichhorn, Martin, Bänziger, Oskar, Zumbrunn, Thomas, de Boer, Wout E. L., Fischer, Katrin
Format: Online Article Text
Language: English
Published: BioMed Central 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6607597/
https://www.ncbi.nlm.nih.gov/pubmed/31266488
http://dx.doi.org/10.1186/s12888-019-2171-y
Description
Summary: BACKGROUND: Expert psychiatrists conducting work disability evaluations often disagree on work capacity (WC) when assessing the same patient. More structured and standardised evaluations focusing on function could improve agreement. The RELY studies aimed to establish the inter-rater reproducibility (reliability and agreement) of ‘functional evaluations’ in patients with mental disorders applying for disability benefits and to compare the effect of limited versus intensive expert training on reproducibility.

METHODS: We performed two multi-centre reproducibility studies of standardised functional WC evaluation (RELY 1 and 2). Trained psychiatrists interviewed 30 and 40 patients respectively and determined WC using the Instrument for Functional Assessment in Psychiatry (IFAP). Three psychiatrists per patient estimated WC from videotaped evaluations. We analysed reliability (intraclass correlation coefficients [ICC]) and agreement (standard error of measurement [SEM] and proportions of comparisons within prespecified limits) between expert evaluations of WC. Our primary outcome was WC in alternative work (WC(alternative.work)), 100–0%. Secondary outcomes were WC in the last job (WC(last.job)), 100–0%; patients’ perceived fairness of the evaluation, 10–0, higher is better; and usefulness to psychiatrists.

RESULTS: Inter-rater reliability for WC(alternative.work) was fair in RELY 1 (ICC 0.43; 95% CI 0.22–0.60) and RELY 2 (ICC 0.44; 0.25–0.59). Agreement was low in both studies: the standard error of measurement for WC(alternative.work) was 24.6 percentage points (20.9–28.4) and 19.4 (16.9–22.0) respectively. Using a ‘maximum acceptable difference’ of 25 percentage points WC(alternative.work) between two experts, 61.6% of comparisons in RELY 1 and 73.6% of comparisons in RELY 2 fell within these limits. A post-hoc secondary analysis of RELY 2 versus RELY 1 showed a significant change in SEM(alternative.work) (−5.2 percentage points WC(alternative.work) [95% CI −9.7 to −0.6]) and in the proportion of differences ≤ 25 percentage points WC(alternative.work) between two experts (p = 0.008). Patients perceived the functional evaluation as fair (RELY 1: mean 8.0; RELY 2: 9.4), and psychiatrists found it useful.

CONCLUSIONS: Evidence from non-randomised studies suggests that intensive training in functional evaluation may increase agreement on WC between experts, but it fell short of stakeholders’ expectations and did not alter reliability. Isolated efforts to train psychiatrists may not suffice to reach the expected level of agreement. A societal discussion about achievable goals and readiness to consider procedural changes in WC evaluations may be warranted.

ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.1186/s12888-019-2171-y) contains supplementary material, which is available to authorized users.
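The agreement statistics in the abstract (the SEM and the proportion of pairwise comparisons within a ‘maximum acceptable difference’ of 25 percentage points) can be illustrated with a small sketch. The code below is not the authors’ analysis code; it is a minimal Python illustration assuming a patients × raters matrix of WC ratings on a 0–100 scale, and it approximates the SEM from a simple one-way (within-patient) model rather than the study’s exact variance-components approach.

```python
import numpy as np
from itertools import combinations

def proportion_within_limit(ratings, limit=25.0):
    """Share of pairwise expert comparisons whose absolute difference in
    WC ratings falls within the 'maximum acceptable difference'.
    ratings: 2-D array, rows = patients, columns = raters (0-100 scale)."""
    diffs = []
    for row in ratings:
        row = row[~np.isnan(row)]          # drop missing ratings for this patient
        for a, b in combinations(row, 2):  # all rater pairs for one patient
            diffs.append(abs(a - b))
    return float(np.mean(np.asarray(diffs) <= limit))

def sem_one_way(ratings):
    """Rough standard error of measurement: square root of the mean
    within-patient variance across raters (one-way model)."""
    within_var = np.nanvar(ratings, axis=1, ddof=1)  # per-patient rater variance
    return float(np.sqrt(np.nanmean(within_var)))

if __name__ == "__main__":
    # Hypothetical simulated data: 30 patients, 3 raters each, rater noise SD ~20.
    rng = np.random.default_rng(0)
    true_wc = rng.uniform(0, 100, size=30)
    ratings = np.clip(true_wc[:, None] + rng.normal(0, 20, size=(30, 3)), 0, 100)
    print("Approximate SEM (percentage points):", round(sem_one_way(ratings), 1))
    print("Proportion of comparisons within 25 points:",
          round(proportion_within_limit(ratings), 2))
```

The pairwise-proportion calculation mirrors the ‘maximum acceptable difference’ criterion directly; the SEM function is only a simplified stand-in for the agreement SEM reported in the abstract, which the study derived from its reliability model.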