Quantifying machine influence over human forecasters


Bibliographic Details
Main Authors: Abeliuk, Andrés; Benjamin, Daniel M.; Morstatter, Fred; Galstyan, Aram
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7524768/
https://www.ncbi.nlm.nih.gov/pubmed/32994447
http://dx.doi.org/10.1038/s41598-020-72690-4
Collection: PubMed
Description: Crowdsourcing human forecasts and machine learning models each show promise in predicting future geopolitical outcomes. Crowdsourcing increases accuracy by pooling knowledge, which mitigates individual errors. On the other hand, advances in machine learning have led to machine models that increase accuracy due to their ability to parameterize and adapt to changing environments. To capitalize on the unique advantages of each method, recent efforts have shown improvements by “hybridizing” forecasts—pairing human forecasters with machine models. This study analyzes the effectiveness of such a hybrid system. In a perfect world, independent reasoning by the forecasters combined with the analytic capabilities of the machine models should complement each other to arrive at an ultimately more accurate forecast. However, well-documented biases describe how humans often mistrust and under-utilize such models in their forecasts. In this work, we present a model that can be used to estimate the trust that humans assign to a machine. We use forecasts made in the absence of machine models as prior beliefs to quantify the weights placed on the models. Our model can be used to uncover other aspects of forecasters’ decision-making processes. We find that forecasters trust the model rarely, in a pattern that suggests they treat machine models similarly to expert advisors, but only the best forecasters trust the models when they can be expected to perform well. We also find that forecasters tend to choose models that conform to their prior beliefs as opposed to anchoring on the model forecast. Our results suggest machine models can improve the judgment of a human pool but highlight the importance of accounting for trust and cognitive biases involved in the human judgment process.
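The abstract describes using forecasts made without model access as prior beliefs, then quantifying the weight a forecaster places on the machine model. A minimal sketch of one way such a weight could be estimated is below; the convex-combination form (final = w·machine + (1−w)·prior) and the function name `estimate_trust` are illustrative assumptions, not the authors' published method.

```python
def estimate_trust(prior, machine, final):
    """Least-squares estimate of the trust weight w, assuming the
    forecaster's final forecast is a convex combination of their prior
    belief and the machine forecast:
        final = w * machine + (1 - w) * prior
    Inputs are parallel lists of probabilities, one entry per question.
    Returns w clipped to [0, 1]; 0 means the model is ignored."""
    diff = [m - p for m, p in zip(machine, prior)]   # pull of the model
    denom = sum(d * d for d in diff)
    if denom == 0.0:
        # Model agrees with the prior everywhere; trust is unidentifiable.
        return 0.0
    w = sum((f - p) * d for f, p, d in zip(final, prior, diff)) / denom
    return min(1.0, max(0.0, w))

# Example: a forecaster who moves 30% of the way toward the model
prior = [0.20, 0.50, 0.70]
machine = [0.60, 0.10, 0.90]
final = [0.32, 0.38, 0.76]   # = prior + 0.3 * (machine - prior)
print(round(estimate_trust(prior, machine, final), 6))  # → 0.3
```

A weight near 0 would indicate the forecaster stayed with their prior, and a weight near 1 would indicate anchoring on the model forecast, matching the two behaviors the abstract contrasts.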
ID: pubmed-7524768
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sci Rep
Publication Date: 2020-09-29
License: © The Author(s) 2020. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and the source, a link to the Creative Commons licence is provided, and any changes are indicated. Material not covered by the article's licence, or used beyond what statutory regulation permits, requires permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.