
More is not always better: An experimental individual-level validation of the randomized response technique and the crosswise model


Bibliographic Details
Main Authors: Höglinger, Marc, Jann, Ben
Format: Online Article Text
Language: English
Published: Public Library of Science 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6091935/
https://www.ncbi.nlm.nih.gov/pubmed/30106973
http://dx.doi.org/10.1371/journal.pone.0201770
_version_ 1783347450154582016
author Höglinger, Marc
Jann, Ben
author_facet Höglinger, Marc
Jann, Ben
author_sort Höglinger, Marc
collection PubMed
description Social desirability and the fear of sanctions can deter survey respondents from responding truthfully to sensitive questions. Self-reports on norm breaking behavior such as shoplifting, non-voting, or tax evasion may thus be subject to considerable misreporting. To mitigate such response bias, various indirect question techniques, such as the randomized response technique (RRT), have been proposed. We evaluate the viability of several popular variants of the RRT, including the recently proposed crosswise-model RRT, by comparing respondents’ self-reports on cheating in dice games to actual cheating behavior, thereby distinguishing between false negatives (underreporting) and false positives (overreporting). The study has been implemented as an online survey on Amazon Mechanical Turk (N = 6,505). Our results from two validation designs indicate that the forced-response RRT and the unrelated-question RRT, as implemented in our survey, fail to reduce the level of misreporting compared to conventional direct questioning. For the crosswise-model RRT we do observe a reduction of false negatives. At the same time, however, there is a non-ignorable increase in false positives; a flaw that previous evaluation studies relying on comparative or aggregate-level validation could not detect. Overall, none of the evaluated indirect techniques outperformed conventional direct questioning. Furthermore, our study demonstrates the importance of identifying false negatives as well as false positives to avoid false conclusions about the validity of indirect sensitive question techniques.
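The abstract names the forced-response RRT and the crosswise model. As a minimal illustration (not taken from the article; the design parameters below are hypothetical examples, not the values used in the study), the following Python sketch shows the standard moment estimators that recover the prevalence of a sensitive trait from observed answer shares under these two designs.

# Illustrative sketch only: standard prevalence estimators for two RRT designs.
# p_truth, p_forced_yes, and p_unrelated are hypothetical design parameters.

def forced_response_estimate(yes_share, p_truth=0.5, p_forced_yes=0.25):
    # Forced-response RRT: with probability p_truth the respondent answers
    # truthfully, with probability p_forced_yes a "yes" is forced (otherwise
    # a "no" is forced). Then P(yes) = p_truth * pi + p_forced_yes, hence:
    return (yes_share - p_forced_yes) / p_truth

def crosswise_estimate(same_share, p_unrelated=0.25):
    # Crosswise model: the respondent states whether the answers to the
    # sensitive item and to an unrelated item with known prevalence
    # p_unrelated are the same or different. Then
    # P(same) = pi * p_unrelated + (1 - pi) * (1 - p_unrelated), hence:
    return (same_share + p_unrelated - 1) / (2 * p_unrelated - 1)

if __name__ == "__main__":
    print(forced_response_estimate(0.40))  # 40% "yes" answers -> pi ≈ 0.30
    print(crosswise_estimate(0.65))        # 65% "same" answers -> pi ≈ 0.20

Such estimators only yield aggregate prevalence; the individual-level validation described in the abstract additionally compares each self-report against recorded behavior, which is what allows false negatives and false positives to be separated.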
format Online
Article
Text
id pubmed-6091935
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-6091935 2018-08-30 More is not always better: An experimental individual-level validation of the randomized response technique and the crosswise model Höglinger, Marc Jann, Ben PLoS One Research Article Social desirability and the fear of sanctions can deter survey respondents from responding truthfully to sensitive questions. Self-reports on norm breaking behavior such as shoplifting, non-voting, or tax evasion may thus be subject to considerable misreporting. To mitigate such response bias, various indirect question techniques, such as the randomized response technique (RRT), have been proposed. We evaluate the viability of several popular variants of the RRT, including the recently proposed crosswise-model RRT, by comparing respondents’ self-reports on cheating in dice games to actual cheating behavior, thereby distinguishing between false negatives (underreporting) and false positives (overreporting). The study has been implemented as an online survey on Amazon Mechanical Turk (N = 6,505). Our results from two validation designs indicate that the forced-response RRT and the unrelated-question RRT, as implemented in our survey, fail to reduce the level of misreporting compared to conventional direct questioning. For the crosswise-model RRT we do observe a reduction of false negatives. At the same time, however, there is a non-ignorable increase in false positives; a flaw that previous evaluation studies relying on comparative or aggregate-level validation could not detect. Overall, none of the evaluated indirect techniques outperformed conventional direct questioning. Furthermore, our study demonstrates the importance of identifying false negatives as well as false positives to avoid false conclusions about the validity of indirect sensitive question techniques. Public Library of Science 2018-08-14 /pmc/articles/PMC6091935/ /pubmed/30106973 http://dx.doi.org/10.1371/journal.pone.0201770 Text en © 2018 Höglinger, Jann http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Höglinger, Marc
Jann, Ben
More is not always better: An experimental individual-level validation of the randomized response technique and the crosswise model
title More is not always better: An experimental individual-level validation of the randomized response technique and the crosswise model
title_full More is not always better: An experimental individual-level validation of the randomized response technique and the crosswise model
title_fullStr More is not always better: An experimental individual-level validation of the randomized response technique and the crosswise model
title_full_unstemmed More is not always better: An experimental individual-level validation of the randomized response technique and the crosswise model
title_short More is not always better: An experimental individual-level validation of the randomized response technique and the crosswise model
title_sort more is not always better: an experimental individual-level validation of the randomized response technique and the crosswise model
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6091935/
https://www.ncbi.nlm.nih.gov/pubmed/30106973
http://dx.doi.org/10.1371/journal.pone.0201770
work_keys_str_mv AT hoglingermarc moreisnotalwaysbetteranexperimentalindividuallevelvalidationoftherandomizedresponsetechniqueandthecrosswisemodel
AT jannben moreisnotalwaysbetteranexperimentalindividuallevelvalidationoftherandomizedresponsetechniqueandthecrosswisemodel