Second opinion needed: communicating uncertainty in medical machine learning
There is great excitement that medical artificial intelligence (AI) based on machine learning (ML) can be used to improve decision making at the patient level in a variety of healthcare settings. However, the quantification and communication of uncertainty for individual predictions is often neglected even though uncertainty estimates could lead to more principled decision-making and enable machine learning models to automatically or semi-automatically abstain on samples for which there is high uncertainty.
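The abstention idea described in the abstract — a model that declines to predict when its uncertainty is high — can be sketched as follows. This is a minimal illustration using predictive entropy; the function name and the threshold value are assumptions for the example, not the method proposed in the paper:

```python
import numpy as np

def predict_or_abstain(probs, threshold=0.5):
    """Return the predicted class index, or None (abstain) when uncertain.

    probs: class probabilities for one sample (any iterable of floats).
    threshold: entropy cutoff in nats (hypothetical value for illustration).
    """
    probs = np.asarray(probs, dtype=float)
    # Predictive entropy: high when probability mass is spread across classes.
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    if entropy > threshold:
        return None  # "I don't know": defer the sample to a clinician
    return int(np.argmax(probs))
```

For example, a confident prediction like `[0.98, 0.01, 0.01]` is returned, while a diffuse one like `[0.4, 0.3, 0.3]` triggers abstention. In practice the threshold would be tuned on held-out data to trade coverage against error rate.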
Main Authors: | Kompa, Benjamin; Snoek, Jasper; Beam, Andrew L. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2021 |
Subjects: | Perspective |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7785732/ https://www.ncbi.nlm.nih.gov/pubmed/33402680 http://dx.doi.org/10.1038/s41746-020-00367-3 |
_version_ | 1783632483972022272 |
author | Kompa, Benjamin; Snoek, Jasper; Beam, Andrew L. |
author_facet | Kompa, Benjamin; Snoek, Jasper; Beam, Andrew L. |
author_sort | Kompa, Benjamin |
collection | PubMed |
description | There is great excitement that medical artificial intelligence (AI) based on machine learning (ML) can be used to improve decision making at the patient level in a variety of healthcare settings. However, the quantification and communication of uncertainty for individual predictions is often neglected even though uncertainty estimates could lead to more principled decision-making and enable machine learning models to automatically or semi-automatically abstain on samples for which there is high uncertainty. In this article, we provide an overview of different approaches to uncertainty quantification and abstention for machine learning and highlight how these techniques could improve the safety and reliability of current ML systems being used in healthcare settings. Effective quantification and communication of uncertainty could help to engender trust with healthcare workers, while providing safeguards against known failure modes of current machine learning approaches. As machine learning becomes further integrated into healthcare environments, the ability to say “I’m not sure” or “I don’t know” when uncertain is a necessary capability to enable safe clinical deployment. |
format | Online Article Text |
id | pubmed-7785732 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-77857322021-01-14 Second opinion needed: communicating uncertainty in medical machine learning Kompa, Benjamin Snoek, Jasper Beam, Andrew L. NPJ Digit Med Perspective There is great excitement that medical artificial intelligence (AI) based on machine learning (ML) can be used to improve decision making at the patient level in a variety of healthcare settings. However, the quantification and communication of uncertainty for individual predictions is often neglected even though uncertainty estimates could lead to more principled decision-making and enable machine learning models to automatically or semi-automatically abstain on samples for which there is high uncertainty. In this article, we provide an overview of different approaches to uncertainty quantification and abstention for machine learning and highlight how these techniques could improve the safety and reliability of current ML systems being used in healthcare settings. Effective quantification and communication of uncertainty could help to engender trust with healthcare workers, while providing safeguards against known failure modes of current machine learning approaches. As machine learning becomes further integrated into healthcare environments, the ability to say “I’m not sure” or “I don’t know” when uncertain is a necessary capability to enable safe clinical deployment. Nature Publishing Group UK 2021-01-05 /pmc/articles/PMC7785732/ /pubmed/33402680 http://dx.doi.org/10.1038/s41746-020-00367-3 Text en © The Author(s) 2021 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. 
The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Perspective Kompa, Benjamin Snoek, Jasper Beam, Andrew L. Second opinion needed: communicating uncertainty in medical machine learning |
title | Second opinion needed: communicating uncertainty in medical machine learning |
title_full | Second opinion needed: communicating uncertainty in medical machine learning |
title_fullStr | Second opinion needed: communicating uncertainty in medical machine learning |
title_full_unstemmed | Second opinion needed: communicating uncertainty in medical machine learning |
title_short | Second opinion needed: communicating uncertainty in medical machine learning |
title_sort | second opinion needed: communicating uncertainty in medical machine learning |
topic | Perspective |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7785732/ https://www.ncbi.nlm.nih.gov/pubmed/33402680 http://dx.doi.org/10.1038/s41746-020-00367-3 |
work_keys_str_mv | AT kompabenjamin secondopinionneededcommunicatinguncertaintyinmedicalmachinelearning AT snoekjasper secondopinionneededcommunicatinguncertaintyinmedicalmachinelearning AT beamandrewl secondopinionneededcommunicatinguncertaintyinmedicalmachinelearning |