
Exploring User Learnability and Learning Performance in an App for Depression: Usability Study

BACKGROUND: Mental health apps tend to be narrow in their functioning, with their focus mostly being on tracking, management, or psychoeducation. It is unclear what capability such apps have to facilitate a change in users, particularly in terms of learning key constructs relating to behavioral inte...

Full description

Bibliographic Details
Main Authors: Stiles-Shields, Colleen, Montague, Enid, Lattie, Emily G, Schueller, Stephen M, Kwasny, Mary J, Mohr, David C
Format: Online Article Text
Language: English
Published: JMIR Publications 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5573426/
https://www.ncbi.nlm.nih.gov/pubmed/28801301
http://dx.doi.org/10.2196/humanfactors.7951
_version_ 1783259658386931712
author Stiles-Shields, Colleen
Montague, Enid
Lattie, Emily G
Schueller, Stephen M
Kwasny, Mary J
Mohr, David C
author_facet Stiles-Shields, Colleen
Montague, Enid
Lattie, Emily G
Schueller, Stephen M
Kwasny, Mary J
Mohr, David C
author_sort Stiles-Shields, Colleen
collection PubMed
description BACKGROUND: Mental health apps tend to be narrow in their functioning, with their focus mostly being on tracking, management, or psychoeducation. It is unclear what capability such apps have to facilitate a change in users, particularly in terms of learning key constructs relating to behavioral interventions. Thought Challenger (CBITs, Chicago) is a skill-building app that engages users in cognitive restructuring, a core component of cognitive therapy (CT) for depression. OBJECTIVE: The purpose of this study was to evaluate the learnability and learning performance of users following initial use of Thought Challenger. METHODS: Twenty adults completed in-lab usability testing of Thought Challenger, which comprised two interactions with the app. Learnability was measured via completion times, error rates, and psychologist ratings of user entries in the app; learning performance was measured via a test of CT knowledge and skills. Nonparametric tests were conducted to evaluate differences between individuals with no or mild depression and those with moderate to severe depression, as well as differences in completion times and in pre- and posttest scores. RESULTS: Across the two interactions, the majority of completion times were acceptable (5 min or less), with minimal errors (1.2%, 10/840) and successful completion of CT thought records. Furthermore, CT knowledge and skills significantly improved after the initial use of Thought Challenger (P=.009). CONCLUSIONS: The learning objectives for Thought Challenger during initial uses were successfully met in an evaluation with likely end users. The findings therefore suggest that apps are capable of providing users with opportunities to learn intervention skills.
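
Note on the METHODS and RESULTS above: this record does not include the study's data or analysis code, so the following Python sketch is only a hedged illustration of how a nonparametric pre/post comparison and a between-group comparison of the kind described might be run. The specific tests (Wilcoxon signed-rank, Mann-Whitney U), variable names, and all score and time values are assumptions made for illustration, not taken from the study.

from scipy.stats import wilcoxon, mannwhitneyu

# Hypothetical pre/post CT knowledge-and-skills test scores for 20 participants
# (placeholder values; the study's actual scores are not part of this record)
pre_scores = [12, 14, 11, 15, 13, 10, 16, 12, 14, 13, 11, 15, 12, 13, 14, 10, 16, 12, 13, 15]
post_scores = [14, 15, 13, 16, 15, 12, 17, 13, 15, 14, 13, 16, 13, 15, 15, 12, 17, 14, 14, 16]

# Paired nonparametric comparison of pre- vs posttest scores
w_stat, p_prepost = wilcoxon(pre_scores, post_scores)
print(f"Wilcoxon signed-rank: W={w_stat:.1f}, p={p_prepost:.3f}")

# Hypothetical task completion times (seconds), split by depression severity group
times_no_mild = [210, 185, 240, 200, 260, 230, 190, 220, 205, 250]
times_mod_severe = [230, 260, 215, 280, 245, 270, 225, 255, 240, 265]
u_stat, p_groups = mannwhitneyu(times_no_mild, times_mod_severe, alternative="two-sided")
print(f"Mann-Whitney U: U={u_stat:.1f}, p={p_groups:.3f}")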
format Online
Article
Text
id pubmed-5573426
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-55734262017-09-07 Exploring User Learnability and Learning Performance in an App for Depression: Usability Study Stiles-Shields, Colleen Montague, Enid Lattie, Emily G Schueller, Stephen M Kwasny, Mary J Mohr, David C JMIR Hum Factors Original Paper BACKGROUND: Mental health apps tend to be narrow in their functioning, with their focus mostly being on tracking, management, or psychoeducation. It is unclear what capability such apps have to facilitate a change in users, particularly in terms of learning key constructs relating to behavioral interventions. Thought Challenger (CBITs, Chicago) is a skill-building app that engages users in cognitive restructuring, a core component of cognitive therapy (CT) for depression. OBJECTIVE: The purpose of this study was to evaluate the learnability and learning performance of users following initial use of Thought Challenger. METHODS: Twenty adults completed in-lab usability testing of Thought Challenger, which comprised two interactions with the app. Learnability was measured via completion times, error rates, and psychologist ratings of user entries in the app; learning performance was measured via a test of CT knowledge and skills. Nonparametric tests were conducted to evaluate differences between individuals with no or mild depression and those with moderate to severe depression, as well as differences in completion times and in pre- and posttest scores. RESULTS: Across the two interactions, the majority of completion times were acceptable (5 min or less), with minimal errors (1.2%, 10/840) and successful completion of CT thought records. Furthermore, CT knowledge and skills significantly improved after the initial use of Thought Challenger (P=.009). CONCLUSIONS: The learning objectives for Thought Challenger during initial uses were successfully met in an evaluation with likely end users. The findings therefore suggest that apps are capable of providing users with opportunities to learn intervention skills. JMIR Publications 2017-08-11 /pmc/articles/PMC5573426/ /pubmed/28801301 http://dx.doi.org/10.2196/humanfactors.7951 Text en ©Colleen Stiles-Shields, Enid Montague, Emily G Lattie, Stephen M Schueller, Mary J Kwasny, David C Mohr. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 11.08.2017. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Human Factors, is properly cited. The complete bibliographic information, a link to the original publication on http://humanfactors.jmir.org, as well as this copyright and license information must be included.
spellingShingle Original Paper
Stiles-Shields, Colleen
Montague, Enid
Lattie, Emily G
Schueller, Stephen M
Kwasny, Mary J
Mohr, David C
Exploring User Learnability and Learning Performance in an App for Depression: Usability Study
title Exploring User Learnability and Learning Performance in an App for Depression: Usability Study
title_full Exploring User Learnability and Learning Performance in an App for Depression: Usability Study
title_fullStr Exploring User Learnability and Learning Performance in an App for Depression: Usability Study
title_full_unstemmed Exploring User Learnability and Learning Performance in an App for Depression: Usability Study
title_short Exploring User Learnability and Learning Performance in an App for Depression: Usability Study
title_sort exploring user learnability and learning performance in an app for depression: usability study
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5573426/
https://www.ncbi.nlm.nih.gov/pubmed/28801301
http://dx.doi.org/10.2196/humanfactors.7951
work_keys_str_mv AT stilesshieldscolleen exploringuserlearnabilityandlearningperformanceinanappfordepressionusabilitystudy
AT montagueenid exploringuserlearnabilityandlearningperformanceinanappfordepressionusabilitystudy
AT lattieemilyg exploringuserlearnabilityandlearningperformanceinanappfordepressionusabilitystudy
AT schuellerstephenm exploringuserlearnabilityandlearningperformanceinanappfordepressionusabilitystudy
AT kwasnymaryj exploringuserlearnabilityandlearningperformanceinanappfordepressionusabilitystudy
AT mohrdavidc exploringuserlearnabilityandlearningperformanceinanappfordepressionusabilitystudy