
Bot or Not? Detecting and Managing Participant Deception When Conducting Digital Research Remotely: Case Study of a Randomized Controlled Trial

Bibliographic Details
Main Authors: Loebenberg, Gemma; Oldham, Melissa; Brown, Jamie; Dinu, Larisa; Michie, Susan; Field, Matt; Greaves, Felix; Garnett, Claire
Format: Online Article, Text
Language: English
Published: JMIR Publications, 2023
Subjects: Original Paper
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10540014/
https://www.ncbi.nlm.nih.gov/pubmed/37707943
http://dx.doi.org/10.2196/46523

Description

BACKGROUND: Evaluating digital interventions using remote methods enables the recruitment of large numbers of participants relatively conveniently and cheaply compared with in-person methods. However, research conducted remotely and based on participant self-report with little verification is open to automated “bots” and participant deception.

OBJECTIVE: This paper uses a case study of a remotely conducted trial of an alcohol reduction app to highlight and discuss (1) the issues with participant deception affecting remote research trials with financial compensation; and (2) the importance of rigorous data management to detect and address these issues.

METHODS: We recruited participants on the internet from July 2020 to March 2022 for a randomized controlled trial (n=5602) evaluating the effectiveness of an alcohol reduction app, Drink Less. Follow-up occurred at 3 time points, with financial compensation offered (up to £36 [US $39.23]). Address authentication and telephone verification were used to detect 2 kinds of deception: “bots,” that is, automated responses generated in clusters; and manual participant deception, that is, participants providing false information.

RESULTS: Of the 1142 participants who enrolled in the first 2 months of recruitment, 75.6% (n=863) were identified as bots during data screening. As a result, a CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart) was added, after which no more bots were identified. Manual participant deception occurred throughout the study. Of the 5956 participants (excluding bots) who enrolled in the study, 298 (5%) were identified as false participants. The number of false participants decreased from 110 in November 2020 to a negligible level by February 2022, including several months with none. The decline occurred after we added further screening questions such as attention checks, removed the prominence of financial compensation from social media advertising, and added a requirement to provide a mobile phone number for identity verification.

CONCLUSIONS: Data management protocols are necessary to detect automated bots and manual participant deception in remotely conducted trials. Bots and manual deception can be minimized by adding a CAPTCHA, attention checks, and a requirement to provide a phone number for identity verification, and by not prominently advertising financial compensation on social media.

TRIAL REGISTRATION: ISRCTN Number ISRCTN64052601; https://doi.org/10.1186/ISRCTN64052601
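
The kind of data screening described in the abstract (flagging sign-ups that arrive in tight clusters, checking for reused contact details, and dropping failed attention checks) can be illustrated with a short sketch. This is a minimal, hypothetical example only: the field names (timestamp, phone, attention_check_passed) and thresholds are assumptions for illustration, not the trial's actual data management protocol.

```python
# Illustrative sketch only: a minimal screening pass over enrollment records.
# Field names and thresholds are hypothetical, not the trial's actual protocol.
from collections import Counter
from datetime import datetime, timedelta

def screen_enrolments(records, cluster_window=timedelta(minutes=5), cluster_size=10):
    """Return indices of records flagged as suspected bots or manual deception."""
    flagged = set()

    # 1. Suspected bots: bursts of sign-ups arriving within a short time window.
    times = sorted((datetime.fromisoformat(r["timestamp"]), i)
                   for i, r in enumerate(records))
    for start in range(len(times)):
        end = start
        while end < len(times) and times[end][0] - times[start][0] <= cluster_window:
            end += 1
        if end - start >= cluster_size:
            flagged.update(i for _, i in times[start:end])

    # 2. Suspected manual deception: reused phone numbers or failed attention checks.
    phone_counts = Counter(r["phone"] for r in records)
    for i, r in enumerate(records):
        if phone_counts[r["phone"]] > 1 or not r["attention_check_passed"]:
            flagged.add(i)

    return sorted(flagged)

if __name__ == "__main__":
    sample = [
        {"timestamp": "2020-11-01T10:00:00", "phone": "07000000001", "attention_check_passed": True},
        {"timestamp": "2020-11-01T10:00:05", "phone": "07000000002", "attention_check_passed": False},
        {"timestamp": "2020-11-01T10:00:06", "phone": "07000000002", "attention_check_passed": True},
    ]
    # With a cluster size of 3, all three records are flagged as a cluster,
    # and records 1 and 2 are additionally flagged for a reused phone number.
    print(screen_enrolments(sample, cluster_size=3))
```

In practice such flags would only identify records for manual review, mirroring the paper's point that screening is a layered data management step rather than a single automated filter.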

Published in: J Med Internet Res (Original Paper), JMIR Publications, 14 September 2023.

©Gemma Loebenberg, Melissa Oldham, Jamie Brown, Larisa Dinu, Susan Michie, Matt Field, Felix Greaves, Claire Garnett. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 14.09.2023. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.