
Implications for Electronic Surveys in Inpatient Settings Based on Patient Survey Response Patterns: Cross-Sectional Study


Bibliographic Details
Main Authors: Gregory, Megan E, Sova, Lindsey N, Huerta, Timothy R, McAlearney, Ann Scheck
Format: Online Article Text
Language: English
Published: JMIR Publications 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10652193/
https://www.ncbi.nlm.nih.gov/pubmed/37910163
http://dx.doi.org/10.2196/48236
_version_ 1785136157980360704
author Gregory, Megan E
Sova, Lindsey N
Huerta, Timothy R
McAlearney, Ann Scheck
author_facet Gregory, Megan E
Sova, Lindsey N
Huerta, Timothy R
McAlearney, Ann Scheck
author_sort Gregory, Megan E
collection PubMed
description BACKGROUND: Surveys of hospitalized patients are important for research and learning about unobservable medical issues (eg, mental health, quality of life, and symptoms), but there has been little work examining survey data quality in this population, whose capacity to respond to survey items may differ from the general population.
OBJECTIVE: The aim of this study is to determine what factors drive response rates, survey drop-offs, and missing data in surveys of hospitalized patients.
METHODS: Cross-sectional surveys were distributed on an inpatient tablet to patients in a large, midwestern US hospital. Three versions were tested: 1 with 174 items and 2 with 111 items; one 111-item version had missing item reminders that prompted participants when they did not answer items. Response rate, drop-off rate (abandoning the survey before completion), and item missingness (skipping items) were examined to investigate data quality. Chi-square tests, Kaplan-Meier survival curves, and distribution charts were used to compare data quality among survey versions. Response duration was computed for each version.
RESULTS: Overall, 2981 patients responded. Response rate did not differ between the 174- and 111-item versions (81.7% vs 83%, P=.53). Drop-off was significantly reduced when the survey was shortened (65.7% vs 20.2% of participants dropped off, P<.001). Approximately one-quarter of participants dropped off by item 120, with over half dropping off by item 158. The percentage of participants with missing data decreased substantially when missing item reminders were added (77.2% vs 31.7% of participants, P<.001). The mean percentage of items with missing data was reduced in the shorter survey (40.7% vs 20.3% of items missing); with missing item reminders, the percentage of items with missing data was further reduced (20.3% vs 11.7% of items missing). Across versions, for the median participant, each item added 24.6 seconds to a survey's duration.
CONCLUSIONS: Hospitalized patients may have a higher tolerance for longer surveys than the general population, but surveys given to hospitalized patients should have a maximum of 120 items to ensure high rates of completion. Missing item prompts should be used to reduce missing data. Future research should examine generalizability to nonhospitalized individuals.
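The RESULTS above compare drop-off proportions between survey versions with a chi-square test. As a rough illustration of that comparison (this record does not report per-version sample sizes, so the group sizes of 1000 per version below are hypothetical; only the 65.7% and 20.2% drop-off rates come from the abstract), the 2×2 chi-square statistic can be computed directly in pure Python:

```python
# Chi-square test of independence for a 2x2 contingency table,
# computed from the closed-form formula (no continuity correction).
# NOTE: per-version sample sizes are not reported in this record, so the
# group sizes (1000 per version) are hypothetical; only the drop-off
# percentages (65.7% vs 20.2%) are taken from the abstract.

def chi_square_2x2(a, b, c, d):
    """Return the chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Rows = survey version (174-item, 111-item); columns = (dropped off, completed).
long_dropped, long_completed = 657, 343      # 65.7% of a hypothetical 1000
short_dropped, short_completed = 202, 798    # 20.2% of a hypothetical 1000

chi2 = chi_square_2x2(long_dropped, long_completed,
                      short_dropped, short_completed)
print(round(chi2, 1))  # far above 10.83, the df=1 critical value for P=.001
```

A statistic this far past the critical value is consistent with the reported P<.001, though the exact value depends on the true group sizes.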
format Online
Article
Text
id pubmed-10652193
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-106521932023-11-01 Implications for Electronic Surveys in Inpatient Settings Based on Patient Survey Response Patterns: Cross-Sectional Study Gregory, Megan E Sova, Lindsey N Huerta, Timothy R McAlearney, Ann Scheck J Med Internet Res Original Paper BACKGROUND: Surveys of hospitalized patients are important for research and learning about unobservable medical issues (eg, mental health, quality of life, and symptoms), but there has been little work examining survey data quality in this population, whose capacity to respond to survey items may differ from the general population. OBJECTIVE: The aim of this study is to determine what factors drive response rates, survey drop-offs, and missing data in surveys of hospitalized patients. METHODS: Cross-sectional surveys were distributed on an inpatient tablet to patients in a large, midwestern US hospital. Three versions were tested: 1 with 174 items and 2 with 111 items; one 111-item version had missing item reminders that prompted participants when they did not answer items. Response rate, drop-off rate (abandoning the survey before completion), and item missingness (skipping items) were examined to investigate data quality. Chi-square tests, Kaplan-Meier survival curves, and distribution charts were used to compare data quality among survey versions. Response duration was computed for each version. RESULTS: Overall, 2981 patients responded. Response rate did not differ between the 174- and 111-item versions (81.7% vs 83%, P=.53). Drop-off was significantly reduced when the survey was shortened (65.7% vs 20.2% of participants dropped off, P<.001). Approximately one-quarter of participants dropped off by item 120, with over half dropping off by item 158. The percentage of participants with missing data decreased substantially when missing item reminders were added (77.2% vs 31.7% of participants, P<.001).
The mean percentage of items with missing data was reduced in the shorter survey (40.7% vs 20.3% of items missing); with missing item reminders, the percentage of items with missing data was further reduced (20.3% vs 11.7% of items missing). Across versions, for the median participant, each item added 24.6 seconds to a survey's duration. CONCLUSIONS: Hospitalized patients may have a higher tolerance for longer surveys than the general population, but surveys given to hospitalized patients should have a maximum of 120 items to ensure high rates of completion. Missing item prompts should be used to reduce missing data. Future research should examine generalizability to nonhospitalized individuals. JMIR Publications 2023-11-01 /pmc/articles/PMC10652193/ /pubmed/37910163 http://dx.doi.org/10.2196/48236 Text en ©Megan E Gregory, Lindsey N Sova, Timothy R Huerta, Ann Scheck McAlearney. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 01.11.2023. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
spellingShingle Original Paper
Gregory, Megan E
Sova, Lindsey N
Huerta, Timothy R
McAlearney, Ann Scheck
Implications for Electronic Surveys in Inpatient Settings Based on Patient Survey Response Patterns: Cross-Sectional Study
title Implications for Electronic Surveys in Inpatient Settings Based on Patient Survey Response Patterns: Cross-Sectional Study
title_full Implications for Electronic Surveys in Inpatient Settings Based on Patient Survey Response Patterns: Cross-Sectional Study
title_fullStr Implications for Electronic Surveys in Inpatient Settings Based on Patient Survey Response Patterns: Cross-Sectional Study
title_full_unstemmed Implications for Electronic Surveys in Inpatient Settings Based on Patient Survey Response Patterns: Cross-Sectional Study
title_short Implications for Electronic Surveys in Inpatient Settings Based on Patient Survey Response Patterns: Cross-Sectional Study
title_sort implications for electronic surveys in inpatient settings based on patient survey response patterns: cross-sectional study
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10652193/
https://www.ncbi.nlm.nih.gov/pubmed/37910163
http://dx.doi.org/10.2196/48236
work_keys_str_mv AT gregorymegane implicationsforelectronicsurveysininpatientsettingsbasedonpatientsurveyresponsepatternscrosssectionalstudy
AT sovalindseyn implicationsforelectronicsurveysininpatientsettingsbasedonpatientsurveyresponsepatternscrosssectionalstudy
AT huertatimothyr implicationsforelectronicsurveysininpatientsettingsbasedonpatientsurveyresponsepatternscrosssectionalstudy
AT mcalearneyannscheck implicationsforelectronicsurveysininpatientsettingsbasedonpatientsurveyresponsepatternscrosssectionalstudy