Lying on the Dissection Table: Anatomizing Faked Responses
Research has shown that even experts cannot detect faking above chance, but recent studies have suggested that machine learning may help in this endeavor. However, faking differs between faking conditions, previous efforts have not taken these differences into account, and faking indices have yet to be integrated into such approaches.
Main Authors: | Röhner, Jessica; Thoss, Philipp; Schütz, Astrid |
Format: | Online Article Text |
Language: | English |
Published: | Springer US, 2022 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9729128/ https://www.ncbi.nlm.nih.gov/pubmed/35132586 http://dx.doi.org/10.3758/s13428-021-01770-8 |
_version_ | 1784845417029042176 |
author | Röhner, Jessica Thoss, Philipp Schütz, Astrid |
author_facet | Röhner, Jessica Thoss, Philipp Schütz, Astrid |
author_sort | Röhner, Jessica |
collection | PubMed |
description | Research has shown that even experts cannot detect faking above chance, but recent studies have suggested that machine learning may help in this endeavor. However, faking differs between faking conditions, previous efforts have not taken these differences into account, and faking indices have yet to be integrated into such approaches. We reanalyzed seven data sets (N = 1,039) with various faking conditions (high and low scores, different constructs, naïve and informed faking, faking with and without practice, different measures [self-reports vs. implicit association tests; IATs]). We investigated the extent to which and how machine learning classifiers could detect faking under these conditions and compared different input data (response patterns, scores, faking indices) and different classifiers (logistic regression, random forest, XGBoost). We also explored the features that classifiers used for detection. Our results show that machine learning has the potential to detect faking, but detection success varies between conditions from chance levels to 100%. There were differences in detection (e.g., detecting low-score faking was better than detecting high-score faking). For self-reports, response patterns and scores were comparable with regard to faking detection, whereas for IATs, faking indices and response patterns were superior to scores. Logistic regression and random forest worked about equally well and outperformed XGBoost. In most cases, classifiers used more than one feature (faking occurred over different pathways), and the features varied in their relevance. Our research supports the assumption of different faking processes and explains why detecting faking is a complex endeavor. |
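The description above frames faking detection as supervised binary classification over response patterns. As a purely hypothetical illustration (not the authors' code, data, or feature set), a minimal from-scratch logistic regression on simulated Likert-style response patterns, where "fakers" inflate their item scores, might look like this:

```python
# Hypothetical sketch: faking detection as binary classification.
# All data here is simulated; item count, means, and SDs are assumptions.
import math
import random

random.seed(42)

def simulate_respondent(faking):
    # Toy response pattern: five Likert items (1-5); fakers inflate scores.
    base = 4.3 if faking else 3.0
    return [min(5, max(1, round(random.gauss(base, 0.7)))) for _ in range(5)]

# Labeled data set: 1 = faked, 0 = honest.
X = [simulate_respondent(label) for label in (1, 0) * 100]
y = [1, 0] * 100

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train logistic regression with plain stochastic gradient descent.
w = [0.0] * 5
b = 0.0
lr = 0.05
for _ in range(300):
    for xi, yi in zip(X, y):
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
        err = p - yi
        w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
        b -= lr * err

def predict(xi):
    # Classify a response pattern as faked (1) or honest (0).
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5)

accuracy = sum(predict(xi) == yi for xi, yi in zip(X, y)) / len(y)
print(f"training accuracy: {accuracy:.2f}")
```

In the study itself, logistic regression and random forest performed about equally well and outperformed XGBoost; a sketch like the one above only illustrates the general setup, not those results.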
format | Online Article Text |
id | pubmed-9729128 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer US |
record_format | MEDLINE/PubMed |
spelling | pubmed-97291282022-12-09 Lying on the Dissection Table: Anatomizing Faked Responses Röhner, Jessica Thoss, Philipp Schütz, Astrid Behav Res Methods Article Research has shown that even experts cannot detect faking above chance, but recent studies have suggested that machine learning may help in this endeavor. However, faking differs between faking conditions, previous efforts have not taken these differences into account, and faking indices have yet to be integrated into such approaches. We reanalyzed seven data sets (N = 1,039) with various faking conditions (high and low scores, different constructs, naïve and informed faking, faking with and without practice, different measures [self-reports vs. implicit association tests; IATs]). We investigated the extent to which and how machine learning classifiers could detect faking under these conditions and compared different input data (response patterns, scores, faking indices) and different classifiers (logistic regression, random forest, XGBoost). We also explored the features that classifiers used for detection. Our results show that machine learning has the potential to detect faking, but detection success varies between conditions from chance levels to 100%. There were differences in detection (e.g., detecting low-score faking was better than detecting high-score faking). For self-reports, response patterns and scores were comparable with regard to faking detection, whereas for IATs, faking indices and response patterns were superior to scores. Logistic regression and random forest worked about equally well and outperformed XGBoost. In most cases, classifiers used more than one feature (faking occurred over different pathways), and the features varied in their relevance. Our research supports the assumption of different faking processes and explains why detecting faking is a complex endeavor. 
Springer US 2022-02-07 2022 /pmc/articles/PMC9729128/ /pubmed/35132586 http://dx.doi.org/10.3758/s13428-021-01770-8 Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Röhner, Jessica Thoss, Philipp Schütz, Astrid Lying on the Dissection Table: Anatomizing Faked Responses |
title | Lying on the Dissection Table: Anatomizing Faked Responses |
title_full | Lying on the Dissection Table: Anatomizing Faked Responses |
title_fullStr | Lying on the Dissection Table: Anatomizing Faked Responses |
title_full_unstemmed | Lying on the Dissection Table: Anatomizing Faked Responses |
title_short | Lying on the Dissection Table: Anatomizing Faked Responses |
title_sort | lying on the dissection table: anatomizing faked responses |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9729128/ https://www.ncbi.nlm.nih.gov/pubmed/35132586 http://dx.doi.org/10.3758/s13428-021-01770-8 |
work_keys_str_mv | AT rohnerjessica lyingonthedissectiontableanatomizingfakedresponses AT thossphilipp lyingonthedissectiontableanatomizingfakedresponses AT schutzastrid lyingonthedissectiontableanatomizingfakedresponses |