Reliability of Health Information on the Internet: An Examination of Experts' Ratings

BACKGROUND: The use of medical experts to rate the content of health-related sites on the Internet has flourished in recent years. In this research, it has been common practice to use a single medical expert to rate the content of the Web sites. In many cases, the expert has rated the Internet health information as poor, and even potentially dangerous. However, one problem with this approach is that there is no guarantee that other medical experts will rate the sites in a similar manner.

OBJECTIVES: The aim was to assess the reliability of medical experts' judgments of threads in an Internet newsgroup related to a common disease. A secondary aim was to show the limitations of commonly used statistics for measuring reliability (eg, kappa).

METHOD: The participants in this study were 5 medical doctors working in a specialist unit dedicated to the treatment of the disease. They each rated the information contained in newsgroup threads using a 6-point scale designed by the experts themselves. Their ratings were analyzed for reliability using a number of statistics: Cohen's kappa, gamma, Kendall's W, and Cronbach's alpha.

RESULTS: Reliability was absent for ratings of questions and low for ratings of responses. The various measures of reliability gave conflicting results, and none produced high reliability.

CONCLUSIONS: The medical experts showed low agreement when rating the postings from the newsgroup. It is therefore important to test inter-rater reliability in research assessing the accuracy and quality of health-related information on the Internet. The choice of statistic for measuring agreement can itself be problematic, so the assumptions underlying a measure of reliability should be considered before using it. Often, more than one measure will be needed for "triangulation" purposes.
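The four statistics named in the abstract differ in what they treat as "agreement," which is one reason they can give conflicting results. As a rough, self-contained illustration (not the authors' actual analysis), the Python sketch below computes all four for a hypothetical ratings matrix: the 8 threads, the scores, and the function names are invented for this example; only the design (5 raters, a 6-point scale) follows the study.

```python
# Hypothetical illustration of the four reliability statistics named in the
# abstract. The ratings matrix is invented; only its shape mirrors the study
# (rows = newsgroup threads, columns = 5 raters, 6-point scale).
import numpy as np
from itertools import combinations
from scipy.stats import rankdata

ratings = np.array([
    [3, 4, 3, 5, 4],
    [1, 2, 1, 2, 3],
    [6, 5, 6, 4, 5],
    [2, 2, 3, 1, 2],
    [4, 3, 5, 4, 4],
    [5, 6, 4, 6, 5],
    [2, 1, 2, 3, 1],
    [4, 5, 4, 4, 6],
])

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa for two raters: chance-corrected exact agreement."""
    po = np.mean(a == b)                          # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c)    # agreement expected by chance
             for c in np.union1d(a, b))
    return (po - pe) / (1 - pe)

def gk_gamma(a, b):
    """Goodman-Kruskal gamma for two raters: ordinal association, ties excluded."""
    conc = disc = 0
    for i, j in combinations(range(len(a)), 2):
        prod = (a[i] - a[j]) * (b[i] - b[j])
        conc += prod > 0                          # pair ordered the same way
        disc += prod < 0                          # pair ordered oppositely
    return (conc - disc) / (conc + disc)

def kendalls_w(m):
    """Kendall's W over the whole rater panel (no correction for ties)."""
    n, k = m.shape
    ranks = np.apply_along_axis(rankdata, 0, m)   # rank threads within each rater
    r = ranks.sum(axis=1)                         # rank sum per thread
    s = np.sum((r - r.mean()) ** 2)
    return 12 * s / (k ** 2 * (n ** 3 - n))

def cronbachs_alpha(m):
    """Cronbach's alpha, treating each rater as an 'item'."""
    k = m.shape[1]
    rater_var = m.var(axis=0, ddof=1).sum()       # sum of per-rater variances
    total_var = m.sum(axis=1).var(ddof=1)         # variance of total scores
    return k / (k - 1) * (1 - rater_var / total_var)

# Kappa and gamma are defined for pairs of raters, so average over all pairs.
pairs = list(combinations(range(ratings.shape[1]), 2))
print("mean pairwise kappa:", np.mean([cohens_kappa(ratings[:, i], ratings[:, j]) for i, j in pairs]))
print("mean pairwise gamma:", np.mean([gk_gamma(ratings[:, i], ratings[:, j]) for i, j in pairs]))
print("Kendall's W:", kendalls_w(ratings))
print("Cronbach's alpha:", cronbachs_alpha(ratings))
```

Kappa rewards only exact matches on the 6-point scale, gamma and W look at rank order, and alpha at the consistency of total scores, so, as the abstract warns, the four measures can easily tell different stories about the same set of ratings.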

Bibliographic Details
Main Authors: Craigie, Mark; Loader, Brian; Burrows, Roger; Muncer, Steven
Format: Text
Language: English
Published: Gunther Eysenbach, 2002
Journal: J Med Internet Res (Original Paper), 17 January 2002
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1761929/
https://www.ncbi.nlm.nih.gov/pubmed/11956034
http://dx.doi.org/10.2196/jmir.4.1.e2

Record Details
Record ID: pubmed-1761929
Institution: National Center for Biotechnology Information
Collection: PubMed
Record Format: MEDLINE/PubMed
License: © Mark Craigie, Brian Loader, Roger Burrows, Steven Muncer. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 17.1.2002. Except where otherwise noted, articles published in the Journal of Medical Internet Research are distributed under the terms of the Creative Commons Attribution License (http://www.creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited, including full bibliographic details and the URL, and this statement is included.