Algorithm-based advice taking and clinical judgement: impact of advice distance and algorithm information
Main authors: Pálfi, Bence; Arora, Kavleen; Kostopoulou, Olga
Format: Online Article Text
Language: English
Published: Springer International Publishing, 2022
Subjects: Original Article
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9329504/ https://www.ncbi.nlm.nih.gov/pubmed/35895185 http://dx.doi.org/10.1186/s41235-022-00421-6
_version_ | 1784757931691999232 |
author | Pálfi, Bence Arora, Kavleen Kostopoulou, Olga |
author_sort | Pálfi, Bence |
collection | PubMed |
description | Evidence-based algorithms can improve both lay and professional judgements and decisions, yet they remain underutilised. Research on advice taking established that humans tend to discount advice—especially when it contradicts their own judgement (“egocentric advice discounting”)—but this can be mitigated by knowledge about the advisor’s past performance. Advice discounting has typically been investigated using tasks with outcomes of low importance (e.g. general knowledge questions) and students as participants. Using the judge-advisor framework, we tested whether the principles of advice discounting apply in the clinical domain. We used realistic patient scenarios, algorithmic advice from a validated cancer risk calculator, and general practitioners (GPs) as participants. GPs could update their risk estimates after receiving algorithmic advice. Half of them received information about the algorithm’s derivation, validation, and accuracy. We measured weight of advice and found that, on average, GPs weighed their estimates and the algorithm equally—but not always: they retained their initial estimates 29% of the time, and fully updated them 27% of the time. Updating did not depend on whether GPs were informed about the algorithm. We found a weak negative quadratic relationship between estimate updating and advice distance: although GPs integrate algorithmic advice on average, they may somewhat discount it, if it is very different from their own estimate. These results present a more complex picture than simple egocentric discounting of advice. They cast a more optimistic view of advice taking, where experts weigh algorithmic advice and their own judgement equally and move towards the advice even when it contradicts their own initial estimates. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s41235-022-00421-6. |
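The abstract reports a "weight of advice" measure. This record does not give the paper's exact operationalisation, but the conventional weight-of-advice (WoA) formula from the judge-advisor literature can be sketched as follows; the function name and the example values are illustrative, not taken from the study:

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Conventional WoA from the judge-advisor literature:
    0 means the judge kept the initial estimate unchanged,
    1 means the judge fully adopted the advice.
    Undefined when the advice equals the initial estimate."""
    if advice == initial:
        raise ValueError("WoA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# Illustrative example: a GP's initial cancer-risk estimate is 10%,
# the algorithm advises 30%, and the revised estimate is 20%.
print(weight_of_advice(10, 30, 20))  # 0.5 -> estimate and advice weighted equally
```

On this measure, the abstract's finding that GPs "retained their initial estimates 29% of the time" corresponds to WoA = 0, and "fully updated them 27% of the time" to WoA = 1.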
format | Online Article Text |
id | pubmed-9329504 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer International Publishing |
record_format | MEDLINE/PubMed |
spelling | pubmed-9329504 2022-07-29 Algorithm-based advice taking and clinical judgement: impact of advice distance and algorithm information Pálfi, Bence; Arora, Kavleen; Kostopoulou, Olga. Cogn Res Princ Implic, Original Article. Springer International Publishing 2022-07-27 /pmc/articles/PMC9329504/ /pubmed/35895185 http://dx.doi.org/10.1186/s41235-022-00421-6 Text en © The Author(s) 2022. Open Access: licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Original Article Pálfi, Bence Arora, Kavleen Kostopoulou, Olga Algorithm-based advice taking and clinical judgement: impact of advice distance and algorithm information |
title | Algorithm-based advice taking and clinical judgement: impact of advice distance and algorithm information |
title_sort | algorithm-based advice taking and clinical judgement: impact of advice distance and algorithm information |
topic | Original Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9329504/ https://www.ncbi.nlm.nih.gov/pubmed/35895185 http://dx.doi.org/10.1186/s41235-022-00421-6 |