
Maximum entropy methods for extracting the learned features of deep neural networks


Bibliographic Details
Main Authors: Finnegan, Alex, Song, Jun S.
Format: Online Article Text
Language: English
Published: Public Library of Science 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5679649/
https://www.ncbi.nlm.nih.gov/pubmed/29084280
http://dx.doi.org/10.1371/journal.pcbi.1005836
_version_ 1783277622076112896
author Finnegan, Alex
Song, Jun S.
author_facet Finnegan, Alex
Song, Jun S.
author_sort Finnegan, Alex
collection PubMed
description New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.
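Note: the description above states the approach only at a high level. As a rough illustration of the general idea, and not the authors' exact formulation, the Python sketch below runs a Metropolis sampler over DNA sequences whose energy anchors samples to an input sequence while constraining the output of a generic network scoring function net_fn. The function name net_fn, the weights beta and gamma, the Hamming-distance anchoring term, and the single-base proposal moves are all illustrative assumptions; the paper derives its constraints and sampling scheme from maximum entropy arguments.

import numpy as np

ALPHABET = np.array(list("ACGT"))

def maxent_anchored_samples(anchor, net_fn, beta=5.0, gamma=0.1, n_steps=10000, seed=None):
    # Metropolis sampler over DNA sequences anchored at `anchor`.
    # The (assumed, illustrative) energy penalizes deviation of the network
    # score from its value on the anchor plus the Hamming distance from the
    # anchor, so samples stay near the anchor while preserving whatever
    # feature the network responds to.
    rng = np.random.default_rng(seed)
    anchor = np.array(list(anchor))
    target = net_fn("".join(anchor))

    def energy(seq):
        score = net_fn("".join(seq))
        hamming = np.sum(seq != anchor)
        return beta * (score - target) ** 2 + gamma * hamming

    current, e_curr = anchor.copy(), 0.0   # energy of the anchor itself is 0
    samples = []
    for _ in range(n_steps):
        proposal = current.copy()
        pos = rng.integers(len(proposal))
        proposal[pos] = rng.choice(ALPHABET)        # propose a single-base substitution
        e_prop = energy(proposal)
        if rng.random() < np.exp(e_curr - e_prop):  # standard Metropolis acceptance
            current, e_curr = proposal, e_prop
        samples.append("".join(current))
    return samples

In this kind of sketch, position-wise base frequencies computed over the returned samples would indicate which positions the network constrains, loosely analogous to the motif extraction the abstract describes.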
format Online
Article
Text
id pubmed-5679649
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-5679649 2017-11-18 Maximum entropy methods for extracting the learned features of deep neural networks Finnegan, Alex Song, Jun S. PLoS Comput Biol Research Article New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks. Public Library of Science 2017-10-30 /pmc/articles/PMC5679649/ /pubmed/29084280 http://dx.doi.org/10.1371/journal.pcbi.1005836 Text en © 2017 Finnegan, Song http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Finnegan, Alex
Song, Jun S.
Maximum entropy methods for extracting the learned features of deep neural networks
title Maximum entropy methods for extracting the learned features of deep neural networks
title_full Maximum entropy methods for extracting the learned features of deep neural networks
title_fullStr Maximum entropy methods for extracting the learned features of deep neural networks
title_full_unstemmed Maximum entropy methods for extracting the learned features of deep neural networks
title_short Maximum entropy methods for extracting the learned features of deep neural networks
title_sort maximum entropy methods for extracting the learned features of deep neural networks
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5679649/
https://www.ncbi.nlm.nih.gov/pubmed/29084280
http://dx.doi.org/10.1371/journal.pcbi.1005836
work_keys_str_mv AT finneganalex maximumentropymethodsforextractingthelearnedfeaturesofdeepneuralnetworks
AT songjuns maximumentropymethodsforextractingthelearnedfeaturesofdeepneuralnetworks