Explainable uncertainty quantifications for deep learning-based molecular property prediction
Quantifying uncertainty in machine learning is important in new research areas with scarce high-quality data. In this work, we develop an explainable uncertainty quantification method for deep learning-based molecular property prediction. This method can capture aleatoric and epistemic uncertainties...
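The abstract describes an ensemble-based model that reports aleatoric and epistemic uncertainty separately and attributes them to individual atoms. As a rough illustration only, the sketch below shows the standard deep-ensemble variance decomposition together with a hypothetical per-atom variant; the function names, array shapes, and the additive atomic-contribution assumption are illustrative and are not details taken from the paper itself.

```python
import numpy as np

# Minimal sketch of the usual deep-ensemble variance decomposition.
# Names and shapes are illustrative assumptions, not the paper's exact scheme.

def decompose_ensemble_uncertainty(means, variances):
    """Split total predictive variance into aleatoric and epistemic parts.

    means, variances: arrays of shape (n_models, n_samples) holding each ensemble
    member's predicted mean and predicted (data-noise) variance per molecule.
    """
    aleatoric = variances.mean(axis=0)  # average of the predicted noise variances
    epistemic = means.var(axis=0)       # disagreement (variance) of the ensemble means
    return aleatoric, epistemic, aleatoric + epistemic


def attribute_to_atoms(atom_means, atom_variances):
    """Hypothetical atom-level attribution: if a molecular property is modeled as a
    sum of atomic contributions, the same decomposition can be applied per atom.

    atom_means, atom_variances: arrays of shape (n_models, n_atoms) for one molecule.
    """
    atom_aleatoric = atom_variances.mean(axis=0)
    atom_epistemic = atom_means.var(axis=0)
    return atom_aleatoric, atom_epistemic


# Example with random numbers standing in for five ensemble members and ten molecules.
rng = np.random.default_rng(0)
mu = rng.normal(size=(5, 10))
var = rng.uniform(0.1, 0.5, size=(5, 10))
ale, epi, tot = decompose_ensemble_uncertainty(mu, var)
```

Under this kind of decomposition, high aleatoric variance flags chemical species whose training data may be noisy, while high epistemic variance flags chemistry the ensemble has not seen, which mirrors the diagnostic use described in the abstract.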
Main Authors: | Yang, Chu-I; Li, Yi-Pei |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer International Publishing, 2023 |
Subjects: | Research |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9898940/ https://www.ncbi.nlm.nih.gov/pubmed/36737786 http://dx.doi.org/10.1186/s13321-023-00682-3 |
_version_ | 1784882537985736704 |
---|---|
author | Yang, Chu-I Li, Yi-Pei |
author_facet | Yang, Chu-I Li, Yi-Pei |
author_sort | Yang, Chu-I |
collection | PubMed |
description | Quantifying uncertainty in machine learning is important in new research areas with scarce high-quality data. In this work, we develop an explainable uncertainty quantification method for deep learning-based molecular property prediction. This method can capture aleatoric and epistemic uncertainties separately and attribute the uncertainties to atoms present in the molecule. The atom-based uncertainty method provides an extra layer of chemical insight to the estimated uncertainties, i.e., one can analyze individual atomic uncertainty values to diagnose the chemical component that introduces uncertainty to the prediction. Our experiments suggest that atomic uncertainty can detect unseen chemical structures and identify chemical species whose data are potentially associated with significant noise. Furthermore, we propose a post-hoc calibration method to refine the uncertainty quantified by ensemble models for better confidence interval estimates. This work improves uncertainty calibration and provides a framework for assessing whether and why a prediction should be considered unreliable. GRAPHICAL ABSTRACT: [Image: see text] SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s13321-023-00682-3. |
format | Online Article Text |
id | pubmed-9898940 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Springer International Publishing |
record_format | MEDLINE/PubMed |
spelling | pubmed-9898940 2023-02-05 Explainable uncertainty quantifications for deep learning-based molecular property prediction Yang, Chu-I Li, Yi-Pei J Cheminform Research Quantifying uncertainty in machine learning is important in new research areas with scarce high-quality data. In this work, we develop an explainable uncertainty quantification method for deep learning-based molecular property prediction. This method can capture aleatoric and epistemic uncertainties separately and attribute the uncertainties to atoms present in the molecule. The atom-based uncertainty method provides an extra layer of chemical insight to the estimated uncertainties, i.e., one can analyze individual atomic uncertainty values to diagnose the chemical component that introduces uncertainty to the prediction. Our experiments suggest that atomic uncertainty can detect unseen chemical structures and identify chemical species whose data are potentially associated with significant noise. Furthermore, we propose a post-hoc calibration method to refine the uncertainty quantified by ensemble models for better confidence interval estimates. This work improves uncertainty calibration and provides a framework for assessing whether and why a prediction should be considered unreliable. GRAPHICAL ABSTRACT: [Image: see text] SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s13321-023-00682-3. Springer International Publishing 2023-02-03 /pmc/articles/PMC9898940/ /pubmed/36737786 http://dx.doi.org/10.1186/s13321-023-00682-3 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Research Yang, Chu-I Li, Yi-Pei Explainable uncertainty quantifications for deep learning-based molecular property prediction |
title | Explainable uncertainty quantifications for deep learning-based molecular property prediction |
title_full | Explainable uncertainty quantifications for deep learning-based molecular property prediction |
title_fullStr | Explainable uncertainty quantifications for deep learning-based molecular property prediction |
title_full_unstemmed | Explainable uncertainty quantifications for deep learning-based molecular property prediction |
title_short | Explainable uncertainty quantifications for deep learning-based molecular property prediction |
title_sort | explainable uncertainty quantifications for deep learning-based molecular property prediction |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9898940/ https://www.ncbi.nlm.nih.gov/pubmed/36737786 http://dx.doi.org/10.1186/s13321-023-00682-3 |
work_keys_str_mv | AT yangchui explainableuncertaintyquantificationsfordeeplearningbasedmolecularpropertyprediction AT liyipei explainableuncertaintyquantificationsfordeeplearningbasedmolecularpropertyprediction |