LINA: A Linearizing Neural Network Architecture for Accurate First-Order and Second-Order Interpretations

While neural networks can provide high predictive performance, it is challenging to identify the salient features and important feature interactions used for their predictions. This is a key hurdle for deploying neural networks in many biomedical applications that require interpretability, including predictive genomics. In this paper, the linearizing neural network architecture (LINA) is developed to provide both first-order and second-order interpretations at both the instance-wise and model-wise levels. LINA combines the representational capacity of a deep inner attention neural network with a linearized intermediate representation for model interpretation. In comparison with DeepLIFT, LIME, Grad*Input, and L2X, the first-order interpretation of LINA had better Spearman correlation with the ground-truth importance rankings of features in synthetic datasets. In comparison with NID and GEH, the second-order interpretation from LINA achieved better precision in identifying the ground-truth feature interactions in synthetic datasets. These algorithms were further benchmarked using predictive genomics as a real-world application. LINA identified larger numbers of important single nucleotide polymorphisms (SNPs) and salient SNP interactions than the other algorithms at given false discovery rates. The results demonstrate accurate and versatile model interpretation using LINA.
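The abstract describes LINA as a deep inner attention network whose prediction passes through a linearized intermediate representation, from which first-order and second-order interpretations are read off. The PyTorch sketch below illustrates that general idea under stated assumptions: a coefficient network produces per-instance weights w(x), the prediction is the linear form w(x)·x + b, first-order importances are taken as |w_i(x)·x_i|, and second-order interactions as the Jacobian |∂w_i(x)/∂x_j|. The class and function names, the use of PyTorch, and the exact scoring formulas are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch of a LINA-style "linearizing" network (illustration only).
import torch
import torch.nn as nn

class LinearizingNet(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # Deep inner network that outputs one coefficient per input feature.
        self.coef_net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features),
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        w = self.coef_net(x)                           # instance-wise coefficients w(x)
        y = (w * x).sum(dim=-1, keepdim=True) + self.bias
        return y, w

def first_order_importance(model, x):
    # Instance-wise first-order scores: |w_i(x) * x_i| for each feature i.
    _, w = model(x)
    return (w * x).abs()

def second_order_interaction(model, x_single):
    # Instance-wise second-order scores: |d w_i(x) / d x_j|, i.e. how strongly
    # feature j modulates the effective linear coefficient of feature i.
    coef_fn = lambda z: model(z.unsqueeze(0))[1].squeeze(0)
    jac = torch.autograd.functional.jacobian(coef_fn, x_single)
    return jac.abs()                                   # (n_features, n_features)

# Example: model-wise interpretation could average these instance-wise scores
# over a dataset (an assumption here, mirroring the abstract's wording).
x = torch.randn(8, 20)
model = LinearizingNet(n_features=20)
y_hat, coeffs = model(x)
fo = first_order_importance(model, x)                  # shape (8, 20)
so = second_order_interaction(model, x[0])             # shape (20, 20)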

Bibliographic Details
Main Authors: BADRÉ, ADRIEN; PAN, CHONGLE
Format: Online Article Text
Language: English
Published: 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9032252/
https://www.ncbi.nlm.nih.gov/pubmed/35462722
http://dx.doi.org/10.1109/access.2022.3163257
Journal: IEEE Access (published online 2022-03-30)
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0/)