ProteinBERT: a universal deep-learning model of protein sequence and function
SUMMARY: Self-supervised deep language modeling has shown unprecedented success across natural language tasks, and has recently been repurposed for biological sequences. However, existing models and pretraining methods are designed and optimized for text analysis. We introduce ProteinBERT, a deep language model specifically designed for proteins. Our pretraining scheme combines language modeling with a novel task of Gene Ontology (GO) annotation prediction. We introduce novel architectural elements that make the model highly efficient and flexible to long sequences. The architecture of ProteinBERT consists of both local and global representations, allowing end-to-end processing of these two types of inputs and outputs. ProteinBERT obtains near state-of-the-art performance, and sometimes exceeds it, on multiple benchmarks covering diverse protein properties (including protein structure, post-translational modifications and biophysical attributes), despite using a far smaller and faster model than competing deep-learning methods. Overall, ProteinBERT provides an efficient framework for rapidly training protein predictors, even with limited labeled data. AVAILABILITY AND IMPLEMENTATION: Code and pretrained model weights are available at https://github.com/nadavbra/protein_bert. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
Main Authors: Brandes, Nadav; Ofer, Dan; Peleg, Yam; Rappoport, Nadav; Linial, Michal
Format: Online Article Text
Language: English
Published: Oxford University Press, 2022
Subjects: Original Papers
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9386727/ https://www.ncbi.nlm.nih.gov/pubmed/35020807 http://dx.doi.org/10.1093/bioinformatics/btac020
_version_ | 1784769877106491392 |
author | Brandes, Nadav; Ofer, Dan; Peleg, Yam; Rappoport, Nadav; Linial, Michal
author_facet | Brandes, Nadav; Ofer, Dan; Peleg, Yam; Rappoport, Nadav; Linial, Michal
author_sort | Brandes, Nadav |
collection | PubMed |
description | SUMMARY: Self-supervised deep language modeling has shown unprecedented success across natural language tasks, and has recently been repurposed for biological sequences. However, existing models and pretraining methods are designed and optimized for text analysis. We introduce ProteinBERT, a deep language model specifically designed for proteins. Our pretraining scheme combines language modeling with a novel task of Gene Ontology (GO) annotation prediction. We introduce novel architectural elements that make the model highly efficient and flexible to long sequences. The architecture of ProteinBERT consists of both local and global representations, allowing end-to-end processing of these two types of inputs and outputs. ProteinBERT obtains near state-of-the-art performance, and sometimes exceeds it, on multiple benchmarks covering diverse protein properties (including protein structure, post-translational modifications and biophysical attributes), despite using a far smaller and faster model than competing deep-learning methods. Overall, ProteinBERT provides an efficient framework for rapidly training protein predictors, even with limited labeled data. AVAILABILITY AND IMPLEMENTATION: Code and pretrained model weights are available at https://github.com/nadavbra/protein_bert. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
format | Online Article Text |
id | pubmed-9386727 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Oxford University Press |
record_format | MEDLINE/PubMed |
spelling | pubmed-9386727-2022-08-19 ProteinBERT: a universal deep-learning model of protein sequence and function. Brandes, Nadav; Ofer, Dan; Peleg, Yam; Rappoport, Nadav; Linial, Michal. Bioinformatics, Original Papers. Oxford University Press 2022-02-10 /pmc/articles/PMC9386727/ /pubmed/35020807 http://dx.doi.org/10.1093/bioinformatics/btac020 Text en © The Author(s) 2022. Published by Oxford University Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle | Original Papers; Brandes, Nadav; Ofer, Dan; Peleg, Yam; Rappoport, Nadav; Linial, Michal; ProteinBERT: a universal deep-learning model of protein sequence and function
title | ProteinBERT: a universal deep-learning model of protein sequence and function |
title_full | ProteinBERT: a universal deep-learning model of protein sequence and function |
title_fullStr | ProteinBERT: a universal deep-learning model of protein sequence and function |
title_full_unstemmed | ProteinBERT: a universal deep-learning model of protein sequence and function |
title_short | ProteinBERT: a universal deep-learning model of protein sequence and function |
title_sort | proteinbert: a universal deep-learning model of protein sequence and function |
topic | Original Papers |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9386727/ https://www.ncbi.nlm.nih.gov/pubmed/35020807 http://dx.doi.org/10.1093/bioinformatics/btac020 |
work_keys_str_mv | AT brandesnadav proteinbertauniversaldeeplearningmodelofproteinsequenceandfunction AT oferdan proteinbertauniversaldeeplearningmodelofproteinsequenceandfunction AT pelegyam proteinbertauniversaldeeplearningmodelofproteinsequenceandfunction AT rappoportnadav proteinbertauniversaldeeplearningmodelofproteinsequenceandfunction AT linialmichal proteinbertauniversaldeeplearningmodelofproteinsequenceandfunction |
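The availability note in the description points to the authors' code repository. As a minimal, hedged usage sketch of how the pretrained model can be loaded to extract the local (per-residue) and global (whole-protein) representations described in the summary: it follows the pattern documented in the repository's README, and the names `load_pretrained_model`, `get_model_with_hidden_layers_as_outputs` and `encode_X` should be treated as assumptions if the package API has since changed.

```python
# Hedged sketch: follows the usage pattern in the README of
# https://github.com/nadavbra/protein_bert; the function names are
# assumptions if the package API has changed since this record was made.
from proteinbert import load_pretrained_model
from proteinbert.conv_and_global_attention_model import \
    get_model_with_hidden_layers_as_outputs

# Load the pretrained model generator and the input encoder
# (downloads the pretrained weights on first use).
pretrained_model_generator, input_encoder = load_pretrained_model()

# ProteinBERT is flexible to long sequences; 512 is an arbitrary
# example choice here, not a model limit.
seq_len = 512
model = get_model_with_hidden_layers_as_outputs(
    pretrained_model_generator.create_model(seq_len))

# Encode a toy protein sequence into the model's two input types:
# local (amino-acid tokens) and global (GO annotations).
seqs = ['MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ']
encoded_x = input_encoder.encode_X(seqs, seq_len)

# local_representations: (batch, seq_len, dim) per-residue embeddings;
# global_representations: (batch, dim) whole-protein embeddings.
local_representations, global_representations = model.predict(encoded_x)
```

These representations can then feed a lightweight downstream predictor, matching the framework's stated goal of rapidly training protein predictors even with limited labeled data.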