Internet Versus Mailed Questionnaires: A Randomized Comparison

BACKGROUND: The use of Internet-based questionnaires for collection of data to evaluate patient education and other interventions has increased in recent years. Many self-report instruments have been validated using paper-and-pencil versions, but we cannot assume that the psychometric properties of...

Full description

Bibliographic Details
Main Authors: Ritter, Philip, Lorig, Kate, Laurent, Diana, Matthews, Katy
Format: Text
Language: English
Published: Gunther Eysenbach 2004
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1550608/
https://www.ncbi.nlm.nih.gov/pubmed/15471755
http://dx.doi.org/10.2196/jmir.6.3.e29
_version_ 1782129245466656768
author Ritter, Philip
Lorig, Kate
Laurent, Diana
Matthews, Katy
author_facet Ritter, Philip
Lorig, Kate
Laurent, Diana
Matthews, Katy
author_sort Ritter, Philip
collection PubMed
description BACKGROUND: The use of Internet-based questionnaires for collection of data to evaluate patient education and other interventions has increased in recent years. Many self-report instruments have been validated using paper-and-pencil versions, but we cannot assume that the psychometric properties of an Internet-based version will be identical. OBJECTIVES: To look at similarities and differences between the Internet versions and the paper-and-pencil versions of 16 existing self-report instruments useful in evaluation of patient interventions. METHODS: Participants were recruited via the Internet and volunteered to participate (N=397), after which they were randomly assigned to fill out questionnaires online or via mailed paper-and-pencil versions. The self-report instruments measured were overall health, health distress, practice mental stress management, Health Assessment Questionnaire (HAQ) disability, illness intrusiveness, activity limitations, visual numeric for pain, visual numeric for shortness of breath, visual numeric for fatigue, self-efficacy for managing disease, aerobic exercise, stretching and strengthening exercise, visits to MD, hospitalizations, hospital days, and emergency room visits. Means, ranges, and confidence intervals are given for each instrument within each type of questionnaire. The results from the two questionnaires were compared using both parametric and non-parametric tests. Reliability tests were given for multi-item instruments. A separate sample (N=30) filled out identical questionnaires over the Internet within a few days and correlations were used to assess test-retest reliability. RESULTS: Out of 16 instruments, none showed significant differences when the appropriate tests were used. Construct reliability was similar within each type of questionnaire, and Internet test-retest reliability was high. Internet questionnaires required less follow-up to achieve a slightly (non-significant) higher completion rate compared to mailed questionnaires. CONCLUSIONS: Among a convenience sample recruited via the Internet, results from those randomly assigned to Internet participation were at least as good as, if not better than, among those assigned mailed questionnaires, with less recruitment effort required. The instruments administered via the Internet appear to be reliable, and to be answered similarly to the way they are answered when they are administered via traditional mailed paper questionnaires.
format Text
id pubmed-1550608
institution National Center for Biotechnology Information
language English
publishDate 2004
publisher Gunther Eysenbach
record_format MEDLINE/PubMed
spelling pubmed-1550608 2006-10-13 Internet Versus Mailed Questionnaires: A Randomized Comparison Ritter, Philip Lorig, Kate Laurent, Diana Matthews, Katy J Med Internet Res Original Paper BACKGROUND: The use of Internet-based questionnaires for collection of data to evaluate patient education and other interventions has increased in recent years. Many self-report instruments have been validated using paper-and-pencil versions, but we cannot assume that the psychometric properties of an Internet-based version will be identical. OBJECTIVES: To look at similarities and differences between the Internet versions and the paper-and-pencil versions of 16 existing self-report instruments useful in evaluation of patient interventions. METHODS: Participants were recruited via the Internet and volunteered to participate (N=397), after which they were randomly assigned to fill out questionnaires online or via mailed paper-and-pencil versions. The self-report instruments measured were overall health, health distress, practice mental stress management, Health Assessment Questionnaire (HAQ) disability, illness intrusiveness, activity limitations, visual numeric for pain, visual numeric for shortness of breath, visual numeric for fatigue, self-efficacy for managing disease, aerobic exercise, stretching and strengthening exercise, visits to MD, hospitalizations, hospital days, and emergency room visits. Means, ranges, and confidence intervals are given for each instrument within each type of questionnaire. The results from the two questionnaires were compared using both parametric and non-parametric tests. Reliability tests were given for multi-item instruments. A separate sample (N=30) filled out identical questionnaires over the Internet within a few days and correlations were used to assess test-retest reliability. RESULTS: Out of 16 instruments, none showed significant differences when the appropriate tests were used. Construct reliability was similar within each type of questionnaire, and Internet test-retest reliability was high. Internet questionnaires required less follow-up to achieve a slightly (non-significant) higher completion rate compared to mailed questionnaires. CONCLUSIONS: Among a convenience sample recruited via the Internet, results from those randomly assigned to Internet participation were at least as good as, if not better than, among those assigned mailed questionnaires, with less recruitment effort required. The instruments administered via the Internet appear to be reliable, and to be answered similarly to the way they are answered when they are administered via traditional mailed paper questionnaires. Gunther Eysenbach 2004-09-15 /pmc/articles/PMC1550608/ /pubmed/15471755 http://dx.doi.org/10.2196/jmir.6.3.e29 Text en © Philip Ritter, Kate Lorig, Diana Laurent, Katy Matthews. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 15.9.2004. Except where otherwise noted, articles published in the Journal of Medical Internet Research are distributed under the terms of the Creative Commons Attribution License (http://www.creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited, including full bibliographic details and the URL (see "please cite as" above), and this statement is included.
spellingShingle Original Paper
Ritter, Philip
Lorig, Kate
Laurent, Diana
Matthews, Katy
Internet Versus Mailed Questionnaires: A Randomized Comparison
title Internet Versus Mailed Questionnaires: A Randomized Comparison
title_full Internet Versus Mailed Questionnaires: A Randomized Comparison
title_fullStr Internet Versus Mailed Questionnaires: A Randomized Comparison
title_full_unstemmed Internet Versus Mailed Questionnaires: A Randomized Comparison
title_short Internet Versus Mailed Questionnaires: A Randomized Comparison
title_sort internet versus mailed questionnaires: a randomized comparison
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1550608/
https://www.ncbi.nlm.nih.gov/pubmed/15471755
http://dx.doi.org/10.2196/jmir.6.3.e29
work_keys_str_mv AT ritterphilip internetversusmailedquestionnairesarandomizedcomparison
AT lorigkate internetversusmailedquestionnairesarandomizedcomparison
AT laurentdiana internetversusmailedquestionnairesarandomizedcomparison
AT matthewskaty internetversusmailedquestionnairesarandomizedcomparison
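
Note: as a purely illustrative aside, the analyses named in the record's abstract (parametric and non-parametric comparisons between the two randomized arms, internal-consistency reliability for multi-item instruments, and test-retest correlations) can be sketched in a few lines of Python. This is not the authors' analysis code; the scores, group sizes, and number of items below are hypothetical placeholders.

    # Minimal sketch of the kinds of tests described in the abstract.
    # All data below are simulated placeholders, not study data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical scores on one instrument for the two randomized arms.
    internet_scores = rng.normal(loc=3.2, scale=1.0, size=200)
    mailed_scores = rng.normal(loc=3.3, scale=1.0, size=197)

    # Parametric comparison of group means (independent-samples t-test).
    t_stat, t_p = stats.ttest_ind(internet_scores, mailed_scores)

    # Non-parametric comparison of the same groups (Mann-Whitney U).
    u_stat, u_p = stats.mannwhitneyu(internet_scores, mailed_scores)

    def cronbach_alpha(items):
        """Internal-consistency reliability for a respondents-by-items matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical 6-item instrument answered by 200 respondents.
    item_matrix = rng.normal(size=(200, 6)) + rng.normal(size=(200, 1))
    alpha = cronbach_alpha(item_matrix)

    # Test-retest reliability: correlation between two administrations of the
    # same questionnaire taken a few days apart (hypothetical retest sample).
    time1 = rng.normal(size=30)
    time2 = time1 + rng.normal(scale=0.3, size=30)
    retest_r, _ = stats.pearsonr(time1, time2)

    print(f"t-test p={t_p:.3f}, Mann-Whitney p={u_p:.3f}, "
          f"Cronbach alpha={alpha:.2f}, test-retest r={retest_r:.2f}")

In the study's terms, a non-significant p-value from both group comparisons for a given instrument corresponds to "no significant difference" between the Internet and mailed versions, while high alpha and test-retest correlations correspond to the reliability findings reported in the abstract.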