Measuring metacognitive performance: type 1 performance dependence and test-retest reliability
Research on metacognition—thinking about thinking—has grown rapidly and fostered our understanding of human cognition in healthy individuals and clinical populations. Of central importance is the concept of metacognitive performance, which characterizes the capacity of an individual to estimate and report the accuracy of primary (type 1) cognitive processes or actions ensuing from these processes.
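The record below is bibliographic, but the abstract describes two quantitative phenomena: the dependence of metacognitive performance measures on type 1 performance, and test-retest reliability that hinges on trial number. The following Python sketch illustrates both with a simulated signal-detection observer whose confidence readout is corrupted by metacognitive noise. The generative model, the parameter values, and the choice of type 2 AUROC as the metacognitive measure are illustrative assumptions, not the paper's actual measures or analysis code.

```python
# Minimal sketch (illustrative assumptions only): a 2-choice SDT observer with
# metacognitive readout noise, a simple metacognitive measure (type 2 AUROC),
# and a split-half estimate of its test-retest reliability.
import numpy as np

rng = np.random.default_rng(0)

def simulate_confidence(n_trials, d_prime, meta_noise_sd):
    """Simulate trial-wise correctness (0/1) and confidence for one observer."""
    stimulus = rng.integers(0, 2, n_trials)                  # stimulus category 0 or 1
    evidence = rng.normal((stimulus - 0.5) * d_prime, 1.0)   # internal type 1 evidence
    choice = (evidence > 0).astype(int)
    correct = (choice == stimulus).astype(int)
    # Confidence reads out |evidence|, corrupted by metacognitive noise
    readout = np.abs(evidence) + rng.normal(0, meta_noise_sd, n_trials)
    confidence = 1 / (1 + np.exp(-readout))                  # squash to (0, 1)
    return correct, confidence

def auroc2(correct, confidence):
    """Type 2 AUROC: probability that a random correct trial carries higher
    confidence than a random incorrect trial (ties count as 0.5)."""
    conf_c, conf_i = confidence[correct == 1], confidence[correct == 0]
    if len(conf_c) == 0 or len(conf_i) == 0:
        return np.nan
    greater = (conf_c[:, None] > conf_i[None, :]).mean()
    ties = (conf_c[:, None] == conf_i[None, :]).mean()
    return greater + 0.5 * ties

# (1) Type 1 performance dependence: identical metacognitive noise, varying d'
for d in (0.5, 1.0, 2.0):
    correct, conf = simulate_confidence(10_000, d, meta_noise_sd=1.0)
    print(f"d'={d:.1f}  accuracy={correct.mean():.2f}  AUROC2={auroc2(correct, conf):.3f}")

# (2) Split-half reliability (Spearman-Brown corrected) as a function of trial number
def split_half_reliability(n_trials, n_subjects=200, d_prime=1.0):
    halves = []
    for _ in range(n_subjects):
        noise = rng.uniform(0.5, 2.0)                        # subject-specific metacognitive noise
        correct, conf = simulate_confidence(n_trials, d_prime, noise)
        half = n_trials // 2
        halves.append((auroc2(correct[:half], conf[:half]),
                       auroc2(correct[half:], conf[half:])))
    a, b = np.array(halves).T
    ok = np.isfinite(a) & np.isfinite(b)                     # drop halves without errors
    r = np.corrcoef(a[ok], b[ok])[0, 1]
    return 2 * r / (1 + r)                                   # Spearman-Brown correction

for n in (50, 200, 1000):
    print(f"n_trials={n:4d}  split-half reliability ~ {split_half_reliability(n):.2f}")
```

Running this shows how the same level of metacognitive noise yields different AUROC2 values across d' levels, and how the split-half reliability of the measure only grows with trial count, which is the qualitative pattern the abstract reports; the specific numbers depend entirely on the assumed model and parameters.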
Main Author: | Guggenmos, Matthias |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Oxford University Press 2021 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8633424/ https://www.ncbi.nlm.nih.gov/pubmed/34858637 http://dx.doi.org/10.1093/nc/niab040 |
_version_ | 1784607925550972928 |
---|---|
author | Guggenmos, Matthias |
author_facet | Guggenmos, Matthias |
author_sort | Guggenmos, Matthias |
collection | PubMed |
description | Research on metacognition—thinking about thinking—has grown rapidly and fostered our understanding of human cognition in healthy individuals and clinical populations. Of central importance is the concept of metacognitive performance, which characterizes the capacity of an individual to estimate and report the accuracy of primary (type 1) cognitive processes or actions ensuing from these processes. Arguably one of the biggest challenges for measures of metacognitive performance is their dependency on objective type 1 performance, although more recent methods aim to address this issue. The present work scrutinizes the most popular metacognitive performance measures in terms of two critical characteristics: independence of type 1 performance and test-retest reliability. Analyses of data from the Confidence Database (total N = 6912) indicate that no current metacognitive performance measure is independent of type 1 performance. The shape of this dependency is largely reproduced by extending current models of metacognition with a source of metacognitive noise. Moreover, the reliability of metacognitive performance measures is highly sensitive to the combination of type 1 performance and trial number. Importantly, trial numbers frequently employed in metacognition research are too low to achieve an acceptable level of test-retest reliability. Among common task characteristics, simultaneous choice and confidence reports most strongly improved reliability. Finally, general recommendations about design choices and analytical remedies for studies investigating metacognitive performance are provided. |
format | Online Article Text |
id | pubmed-8633424 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Oxford University Press |
record_format | MEDLINE/PubMed |
spelling | pubmed-86334242021-12-01 Measuring metacognitive performance: type 1 performance dependence and test-retest reliability Guggenmos, Matthias Neurosci Conscious Research Article Research on metacognition—thinking about thinking—has grown rapidly and fostered our understanding of human cognition in healthy individuals and clinical populations. Of central importance is the concept of metacognitive performance, which characterizes the capacity of an individual to estimate and report the accuracy of primary (type 1) cognitive processes or actions ensuing from these processes. Arguably one of the biggest challenges for measures of metacognitive performance is their dependency on objective type 1 performance, although more recent methods aim to address this issue. The present work scrutinizes the most popular metacognitive performance measures in terms of two critical characteristics: independence of type 1 performance and test-retest reliability. Analyses of data from the Confidence Database (total N = 6912) indicate that no current metacognitive performance measure is independent of type 1 performance. The shape of this dependency is largely reproduced by extending current models of metacognition with a source of metacognitive noise. Moreover, the reliability of metacognitive performance measures is highly sensitive to the combination of type 1 performance and trial number. Importantly, trial numbers frequently employed in metacognition research are too low to achieve an acceptable level of test-retest reliability. Among common task characteristics, simultaneous choice and confidence reports most strongly improved reliability. Finally, general recommendations about design choices and analytical remedies for studies investigating metacognitive performance are provided. Oxford University Press 2021-11-25 /pmc/articles/PMC8633424/ /pubmed/34858637 http://dx.doi.org/10.1093/nc/niab040 Text en © The Author(s) 2021. Published by Oxford University Press. https://creativecommons.org/licenses/by/4.0/This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Article Guggenmos, Matthias Measuring metacognitive performance: type 1 performance dependence and test-retest reliability |
title | Measuring metacognitive performance: type 1 performance dependence and test-retest reliability |
title_full | Measuring metacognitive performance: type 1 performance dependence and test-retest reliability |
title_fullStr | Measuring metacognitive performance: type 1 performance dependence and test-retest reliability |
title_full_unstemmed | Measuring metacognitive performance: type 1 performance dependence and test-retest reliability |
title_short | Measuring metacognitive performance: type 1 performance dependence and test-retest reliability |
title_sort | measuring metacognitive performance: type 1 performance dependence and test-retest reliability |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8633424/ https://www.ncbi.nlm.nih.gov/pubmed/34858637 http://dx.doi.org/10.1093/nc/niab040 |
work_keys_str_mv | AT guggenmosmatthias measuringmetacognitiveperformancetype1performancedependenceandtestretestreliability |