
System Usability Scale Benchmarking for Digital Health Apps: Meta-analysis



Bibliographic Details
Main Authors: Hyzy, Maciej; Bond, Raymond; Mulvenna, Maurice; Bai, Lu; Dix, Alan; Leigh, Simon; Hunt, Sophie
Format: Online Article Text
Language: English
Published: JMIR Publications, 2022
Subjects: Original Paper
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9437782/
https://www.ncbi.nlm.nih.gov/pubmed/35980732
http://dx.doi.org/10.2196/37290
Description: BACKGROUND: The System Usability Scale (SUS) is a widely used scale that has been used to quantify the usability of many software and hardware products. However, the SUS was not specifically designed to evaluate mobile apps, or in particular digital health apps (DHAs). OBJECTIVE: The aim of this study was to examine whether the widely used SUS distribution for benchmarking (mean 68, SD 12.5) can be used to reliably assess the usability of DHAs. METHODS: A search of the literature was performed using the ACM Digital Library, IEEE Xplore, CORE, PubMed, and Google Scholar databases to identify SUS scores related to the usability of DHAs for meta-analysis. This study included papers that published the SUS scores of the evaluated DHAs from 2011 to 2021 to obtain a 10-year representation. In total, 117 SUS scores for 114 DHAs were identified. RStudio and the R programming language were used to model the DHA SUS distribution, with a 1-sample, 2-tailed t test used to compare this distribution with the standard SUS distribution. RESULTS: The mean SUS score when all the collected apps were included was 76.64 (SD 15.12); however, this distribution exhibited asymmetrical skewness (–0.52) and was not normally distributed according to the Shapiro-Wilk test (P=.002). The mean SUS score for “physical activity” apps was 83.28 (SD 12.39) and drove the skewness. Hence, the mean SUS score for all collected apps excluding “physical activity” apps was 68.05 (SD 14.05). A 1-sample, 2-tailed t test indicated that this health app SUS distribution was not statistically significantly different from the standard SUS distribution (P=.98). CONCLUSIONS: This study concludes that the SUS and the widely accepted benchmark of a mean SUS score of 68 (SD 12.5) are suitable for evaluating the usability of DHAs. We speculate as to why physical activity apps received higher SUS scores than expected. A template for reporting mean SUS scores to facilitate meta-analysis is proposed, together with future work that could be done to further examine the SUS benchmark scores for DHAs.
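The benchmark comparison described in METHODS can be illustrated with a minimal sketch. The study itself used R; the Python version below is an assumption for illustration only, and the synthetic scores are hypothetical stand-ins drawn from the reported summary statistics (mean 68.05, SD 14.05), not the paper's actual data.

```python
# Sketch of the benchmark comparison: does a sample of SUS scores
# differ from the standard SUS benchmark mean of 68?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical stand-ins for the 117 collected SUS scores,
# drawn from the reported mean/SD and clipped to the 0-100 SUS range.
sus_scores = rng.normal(loc=68.05, scale=14.05, size=117).clip(0, 100)

# Shapiro-Wilk test for normality (the paper reports P=.002 on the full set).
w_stat, w_p = stats.shapiro(sus_scores)

# Skewness of the distribution (the paper reports -0.52 on the full set).
skewness = stats.skew(sus_scores)

# 1-sample, 2-tailed t test against the standard SUS benchmark mean of 68.
t_stat, t_p = stats.ttest_1samp(sus_scores, popmean=68)
print(f"mean={sus_scores.mean():.2f}, skew={skewness:.2f}, "
      f"shapiro_p={w_p:.3f}, t={t_stat:.2f}, p={t_p:.3f}")
```

A nonsignificant p value here would support using the standard benchmark (mean 68, SD 12.5) for DHAs, which is the paper's conclusion for the sample excluding "physical activity" apps.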
Published in JMIR Mhealth Uhealth (Original Paper), online August 18, 2022. © Maciej Hyzy, Raymond Bond, Maurice Mulvenna, Lu Bai, Alan Dix, Simon Leigh, Sophie Hunt. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 18.08.2022. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on https://mhealth.jmir.org/, as well as this copyright and license information must be included.