
A self-attention model for inferring cooperativity between regulatory features

Deep learning has demonstrated its predictive power in modeling complex biological phenomena such as gene expression. The value of these models hinges not only on their accuracy, but also on the ability to extract biologically relevant information from the trained models. While there has been much recent work on developing feature attribution methods that discover the most important features for a given sequence, inferring cooperativity between regulatory elements, which is the hallmark of phenomena such as gene expression, remains an open problem. We present SATORI, a Self-ATtentiOn based model to detect Regulatory element Interactions. Our approach combines convolutional layers with a self-attention mechanism that helps us capture a global view of the landscape of interactions between regulatory elements in a sequence. A comprehensive evaluation demonstrates the ability of SATORI to identify numerous statistically significant TF-TF interactions, many of which have been previously reported. Our method is able to detect higher numbers of experimentally verified TF-TF interactions than existing methods, and has the advantage of not requiring a computationally expensive post-processing step. Finally, SATORI can be used for detection of any type of feature interaction in models that use a similar attention mechanism, and is not limited to the detection of TF-TF interactions.
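
The abstract describes an architecture that combines convolutional layers with a self-attention mechanism whose attention weights give a global view of pairwise interactions. The following is a minimal, illustrative sketch of such a model in PyTorch; it is not the authors' implementation, and the class name CNNSelfAttention, the choice of PyTorch, and all layer sizes are assumptions made here for illustration only.

import torch
import torch.nn as nn

class CNNSelfAttention(nn.Module):
    """Toy CNN + self-attention model over one-hot encoded DNA sequences.

    Illustrative only: layer sizes and hyperparameters are not taken from the paper.
    """
    def __init__(self, num_tasks=3, num_filters=128, motif_width=13, num_heads=8):
        super().__init__()
        # Convolutional layer scans the sequence for motif-like local features.
        self.conv = nn.Sequential(
            nn.Conv1d(4, num_filters, kernel_size=motif_width, padding=motif_width // 2),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=4),
        )
        # Multi-head self-attention relates every position to every other
        # position, giving a global view of pairwise feature interactions.
        self.attn = nn.MultiheadAttention(embed_dim=num_filters, num_heads=num_heads,
                                          batch_first=True)
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(num_tasks))

    def forward(self, x):
        # x: (batch, 4, seq_len) one-hot encoded DNA
        h = self.conv(x)                  # (batch, num_filters, reduced_len)
        h = h.transpose(1, 2)             # (batch, reduced_len, num_filters)
        h, attn_weights = self.attn(h, h, h, need_weights=True)
        # attn_weights has shape (batch, reduced_len, reduced_len); this is the
        # matrix one would inspect post hoc to score putative interactions
        # between sequence regions.
        return self.head(h), attn_weights

model = CNNSelfAttention(num_tasks=3)
seqs = torch.randn(2, 4, 600)             # stand-in for one-hot encoded sequences
logits, attn = model(seqs)
print(logits.shape, attn.shape)           # (2, 3) and (2, 150, 150)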

Bibliographic Details
Main Authors: Ullah, Fahad; Ben-Hur, Asa
Format: Online Article (Text)
Language: English
Published: Oxford University Press, 2021
Subjects: Methods Online
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8287919/
https://www.ncbi.nlm.nih.gov/pubmed/33950192
http://dx.doi.org/10.1093/nar/gkab349
Rights: © The Author(s) 2021. Published by Oxford University Press on behalf of Nucleic Acids Research. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.