In randomization we trust? There are overlooked problems in experimenting with people in behavioral intervention trials
OBJECTIVES: Behavioral intervention trials may be susceptible to poorly understood forms of bias stemming from research participation. This article considers how assessment and other prerandomization research activities may introduce bias that is not fully prevented by randomization. STUDY DESIGN AND SETTING: This is a hypothesis-generating discussion article. RESULTS: An additivity assumption underlying conventional thinking in trial design and analysis is problematic in behavioral intervention trials. Postrandomization sources of bias are somewhat better known within the clinical epidemiological and trials literatures. Neglect of attention to possible research participation effects means that unintended participant behavior change stemming from artifacts of the research process has unknown potential to bias estimates of behavioral intervention effects. CONCLUSION: Studies are needed to evaluate how research participation effects are introduced, and we make suggestions for how research in this area may be taken forward, including how these issues may be addressed in the design and conduct of trials. It is proposed that attention to possible research participation effects can improve the design of trials evaluating behavioral and other interventions and inform the interpretation of existing evidence.
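As context for the additivity assumption mentioned in the abstract, the sketch below shows one common way this is formalized; the symbols and notation are illustrative only and are not taken from the article. Research participation effects are conventionally assumed to enter both trial arms equally and therefore cancel in the between-group comparison.

```latex
% Illustrative sketch only: symbols are not from the article.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
Let $\mu$ be the baseline mean outcome, $\tau$ the intervention effect, and
$\rho_T$, $\rho_C$ the research participation effects (consent, assessment,
monitoring) in the intervention and control arms:
\begin{align*}
  \mathbb{E}[Y_T] &= \mu + \tau + \rho_T,\\
  \mathbb{E}[Y_C] &= \mu + \rho_C,\\
  \mathbb{E}[Y_T] - \mathbb{E}[Y_C] &= \tau + (\rho_T - \rho_C).
\end{align*}
The randomized comparison is unbiased for $\tau$ only under the additivity
assumption $\rho_T = \rho_C$; if participation effects interact with the
intervention (e.g., assessment reactivity differs by arm), the difference
$\rho_T - \rho_C$ is absorbed into the estimated effect.
\end{document}
```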
| Main Authors: | McCambridge, Jim; Kypri, Kypros; Elbourne, Diana |
| ---|--- |
| Format: | Online Article Text |
| Language: | English |
| Published: | Elsevier, 2014 |
| Subjects: | Commentary |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3969092/ https://www.ncbi.nlm.nih.gov/pubmed/24314401 http://dx.doi.org/10.1016/j.jclinepi.2013.09.004 |
| id | pubmed-3969092 |
| ---|--- |
| title | In randomization we trust? There are overlooked problems in experimenting with people in behavioral intervention trials |
| author | McCambridge, Jim; Kypri, Kypros; Elbourne, Diana |
| author_sort | McCambridge, Jim |
| topic | Commentary |
| journal | J Clin Epidemiol |
| collection | PubMed |
| institution | National Center for Biotechnology Information |
| record_format | MEDLINE/PubMed |
| format | Online Article Text |
| language | English |
| publisher | Elsevier |
| publishDate | 2014 |
| description | OBJECTIVES: Behavioral intervention trials may be susceptible to poorly understood forms of bias stemming from research participation. This article considers how assessment and other prerandomization research activities may introduce bias that is not fully prevented by randomization. STUDY DESIGN AND SETTING: This is a hypothesis-generating discussion article. RESULTS: An additivity assumption underlying conventional thinking in trial design and analysis is problematic in behavioral intervention trials. Postrandomization sources of bias are somewhat better known within the clinical epidemiological and trials literatures. Neglect of attention to possible research participation effects means that unintended participant behavior change stemming from artifacts of the research process has unknown potential to bias estimates of behavioral intervention effects. CONCLUSION: Studies are needed to evaluate how research participation effects are introduced, and we make suggestions for how research in this area may be taken forward, including how these issues may be addressed in the design and conduct of trials. It is proposed that attention to possible research participation effects can improve the design of trials evaluating behavioral and other interventions and inform the interpretation of existing evidence. |
| license | © 2014 The Authors. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
| url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3969092/ https://www.ncbi.nlm.nih.gov/pubmed/24314401 http://dx.doi.org/10.1016/j.jclinepi.2013.09.004 |