
MonoNet: enhancing interpretability in neural networks via monotonic features

MOTIVATION: Being able to interpret and explain the predictions made by a machine learning model is of fundamental importance. Unfortunately, a trade-off between accuracy and interpretability is often observed. As a result, the interest in developing more transparent yet powerful models has grown considerably over the past few years…

Full description

Bibliographic Details
Main Authors: Nguyen, An-Phi, Moreno, Dana Lea, Le-Bel, Nicolas, Rodríguez Martínez, María
Format: Online Article Text
Language: English
Published: Oxford University Press 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10152389/
https://www.ncbi.nlm.nih.gov/pubmed/37143924
http://dx.doi.org/10.1093/bioadv/vbad016
author Nguyen, An-Phi
Moreno, Dana Lea
Le-Bel, Nicolas
Rodríguez Martínez, María
author_sort Nguyen, An-Phi
collection PubMed
description MOTIVATION: Being able to interpret and explain the predictions made by a machine learning model is of fundamental importance. Unfortunately, a trade-off between accuracy and interpretability is often observed. As a result, the interest in developing more transparent yet powerful models has grown considerably over the past few years. Interpretable models are especially needed in high-stakes scenarios, such as computational biology and medical informatics, where erroneous or biased model predictions can have deleterious consequences for a patient. Furthermore, understanding the inner workings of a model can help increase trust in the model. RESULTS: We introduce a novel structurally constrained neural network, MonoNet, which is more transparent while retaining the learning capabilities of traditional neural models. MonoNet contains monotonically connected layers that ensure monotonic relationships between (high-level) features and outputs. We show how, by leveraging the monotonic constraint in conjunction with other post hoc strategies, we can interpret our model. To demonstrate our model’s capabilities, we train MonoNet to classify cellular populations in a single-cell proteomic dataset. We also demonstrate MonoNet’s performance on other benchmark datasets in different domains, including non-biological applications (in the Supplementary Material). Our experiments show how our model can achieve good performance while at the same time providing useful biological insights about the most important biomarkers. Finally, we carry out an information-theoretic analysis to show how the monotonic constraint actively contributes to the learning process of the model. AVAILABILITY AND IMPLEMENTATION: Code and sample data are available at https://github.com/phineasng/mononet. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics Advances online.
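To make the monotonically connected layers concrete, here is a minimal PyTorch sketch, assuming a positive-weight parameterization (raw weights passed through exp); the class name, layer sizes, and activations are illustrative assumptions, not the authors' implementation (see https://github.com/phineasng/mononet for the actual code).

import torch
import torch.nn as nn

class MonotonicLinear(nn.Module):
    """Linear layer whose effective weights are exp(raw) > 0, so every
    output is non-decreasing in every input by construction. (MonoNet more
    generally fixes the *sign* of each feature's influence; pinning some
    weights to be negative would give decreasing relationships.)"""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.raw_weight = nn.Parameter(0.1 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # exp() keeps every effective weight strictly positive; composed
        # with monotone activations, the whole block stays monotone.
        return x @ torch.exp(self.raw_weight).t() + self.bias

# Unconstrained layers learn high-level features; the monotonic block maps
# those features to class scores, so the direction of each feature's effect
# on each output is known by construction. Sizes below are hypothetical.
net = nn.Sequential(
    nn.Linear(30, 16), nn.ReLU(),       # free feature extractor
    MonotonicLinear(16, 8), nn.Tanh(),  # monotone in the learned features
    MonotonicLinear(8, 4),              # scores for, e.g., 4 cell populations
)
scores = net(torch.randn(5, 30))        # batch of 5 inputs -> shape (5, 4)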
format Online
Article
Text
id pubmed-10152389
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Oxford University Press
record_format MEDLINE/PubMed
spelling pubmed-10152389 2023-05-03 MonoNet: enhancing interpretability in neural networks via monotonic features. Nguyen, An-Phi; Moreno, Dana Lea; Le-Bel, Nicolas; Rodríguez Martínez, María. Bioinform Adv, Original Paper. Oxford University Press 2023-02-23 /pmc/articles/PMC10152389/ /pubmed/37143924 http://dx.doi.org/10.1093/bioadv/vbad016 Text en © The Author(s) 2023. Published by Oxford University Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
title MonoNet: enhancing interpretability in neural networks via monotonic features
title_sort mononet: enhancing interpretability in neural networks via monotonic features
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10152389/
https://www.ncbi.nlm.nih.gov/pubmed/37143924
http://dx.doi.org/10.1093/bioadv/vbad016
work_keys_str_mv AT nguyenanphi mononetenhancinginterpretabilityinneuralnetworksviamonotonicfeatures
AT morenodanalea mononetenhancinginterpretabilityinneuralnetworksviamonotonicfeatures
AT lebelnicolas mononetenhancinginterpretabilityinneuralnetworksviamonotonicfeatures
AT rodriguezmartinezmaria mononetenhancinginterpretabilityinneuralnetworksviamonotonicfeatures