
Correcting Judgment Correctives in National Security Intelligence

Intelligence analysts, like other professionals, form norms that define standards of tradecraft excellence. These norms, however, have evolved in an idiosyncratic manner that reflects the influence of prominent insiders who had keen psychological insights but little appreciation for how to translate those insights into testable hypotheses. The net result is that the prevailing tradecraft norms of best practice are only loosely grounded in the science of judgment and decision-making. The “common sense” of prestigious opinion leaders inside the intelligence community has pre-empted systematic validity testing of the training techniques and judgment aids endorsed by those opinion leaders. Drawing on the scientific literature, we advance hypotheses about how current best practices could well be reducing rather than increasing the quality of analytic products. One set of hypotheses pertains to the failure of tradecraft training to recognize the most basic threat to accuracy: measurement error in the interpretation of the same data and in the communication of interpretations. Another set of hypotheses focuses on the insensitivity of tradecraft training to the risk that issuing broad-brush, one-directional warnings against bias (e.g., over-confidence) will be less likely to encourage self-critical, deliberative cognition than simple response-threshold shifting that yields the mirror-image bias (e.g., under-confidence). Given the magnitude of the consequences of better and worse intelligence analysis flowing to policy-makers, we see a compelling case for greater funding of efforts to test what actually works.


Bibliographic Details
Main Authors: Mandel, David R., Tetlock, Philip E.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2018
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6309046/
https://www.ncbi.nlm.nih.gov/pubmed/30622501
http://dx.doi.org/10.3389/fpsyg.2018.02640
Journal: Front Psychol (Psychology), published online 2018-12-21 by Frontiers Media S.A. Collection: PubMed (PMC6309046).
Copyright © 2018 Mandel and Tetlock.
http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.