
Scaling up the evaluation of psychotherapy: evaluating motivational interviewing fidelity via statistical text classification

BACKGROUND: Behavioral interventions such as psychotherapy are leading, evidence-based practices for a variety of problems (e.g., substance abuse), but the evaluation of provider fidelity to behavioral interventions is limited by the need for human judgment. The current study evaluated the accuracy of statistical text classification in replicating human-based judgments of provider fidelity in one specific psychotherapy—motivational interviewing (MI). METHOD: Participants (n = 148) came from five previously conducted randomized trials and were either primary care patients at a safety-net hospital or university students. To be eligible for the original studies, participants met criteria for either problematic drug or alcohol use. All participants received a type of brief motivational interview, an evidence-based intervention for alcohol and substance use disorders. The Motivational Interviewing Skills Code is a standard measure of MI provider fidelity based on human ratings that was used to evaluate all therapy sessions. A text classification approach called a labeled topic model was used to learn associations between human-based fidelity ratings and MI session transcripts. It was then used to generate codes for new sessions. The primary comparison was the accuracy of model-based codes with human-based codes. RESULTS: Receiver operating characteristic (ROC) analyses of model-based codes showed reasonably strong sensitivity and specificity with those from human raters (range of area under ROC curve (AUC) scores: 0.62 – 0.81; average AUC: 0.72). Agreement with human raters was evaluated based on talk turns as well as code tallies for an entire session. Generated codes had higher reliability with human codes for session tallies and also varied strongly by individual code. CONCLUSION: To scale up the evaluation of behavioral interventions, technological solutions will be required. The current study demonstrated preliminary, encouraging findings regarding the utility of statistical text classification in bridging this methodological gap.

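The abstract summarizes the evaluation as per-code ROC analyses comparing model-generated MISC codes against human-assigned codes at the talk-turn level, reporting an AUC per code and an average AUC across codes. As a minimal illustrative sketch only (not the authors' implementation), the Python snippet below shows how such a comparison could be computed; the code labels, simulated data, and use of scikit-learn are assumptions for illustration.

```python
# Minimal, illustrative sketch (not the authors' implementation) of the kind of
# ROC/AUC comparison described in the abstract: model-derived probabilities for
# each MISC code are compared against binary human-assigned codes per talk turn.
# The code labels and example data below are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: for each MISC code, a binary human label per talk turn and
# the model's predicted probability that the code applies to that turn.
rng = np.random.default_rng(0)
codes = ["open_question", "closed_question", "simple_reflection", "complex_reflection"]
human_labels = {c: rng.integers(0, 2, size=200) for c in codes}
model_scores = {c: rng.random(200) for c in codes}

# Per-code AUC: sensitivity/specificity trade-off of model codes vs. human codes.
aucs = {c: roc_auc_score(human_labels[c], model_scores[c]) for c in codes}
for code, auc in aucs.items():
    print(f"{code}: AUC = {auc:.2f}")

# Average AUC across codes, analogous to the summary figure reported in the abstract.
print(f"average AUC = {np.mean(list(aucs.values())):.2f}")
```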

Bibliographic Details
Main Authors: Atkins, David C, Steyvers, Mark, Imel, Zac E, Smyth, Padhraic
Format: Online Article Text
Language: English
Published: BioMed Central 2014
Subjects: Methodology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4026152/
https://www.ncbi.nlm.nih.gov/pubmed/24758152
http://dx.doi.org/10.1186/1748-5908-9-49
author Atkins, David C
Steyvers, Mark
Imel, Zac E
Smyth, Padhraic
collection PubMed
format Online
Article
Text
id pubmed-4026152
institution National Center for Biotechnology Information
language English
publishDate 2014
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-4026152 2014-05-20. Scaling up the evaluation of psychotherapy: evaluating motivational interviewing fidelity via statistical text classification. Atkins, David C; Steyvers, Mark; Imel, Zac E; Smyth, Padhraic. Implement Sci (Methodology). BioMed Central 2014-04-24. /pmc/articles/PMC4026152/ /pubmed/24758152 http://dx.doi.org/10.1186/1748-5908-9-49 Text en Copyright © 2014 Atkins et al.; licensee BioMed Central Ltd. http://creativecommons.org/licenses/by/2.0 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
title Scaling up the evaluation of psychotherapy: evaluating motivational interviewing fidelity via statistical text classification
topic Methodology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4026152/
https://www.ncbi.nlm.nih.gov/pubmed/24758152
http://dx.doi.org/10.1186/1748-5908-9-49