Light attention predicts protein location from the language of life
Main Authors: | Stärk, Hannes; Dallago, Christian; Heinzinger, Michael; Rost, Burkhard |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Oxford University Press 2021 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9710637/ https://www.ncbi.nlm.nih.gov/pubmed/36700108 http://dx.doi.org/10.1093/bioadv/vbab035 |
_version_ | 1784841409651539968 |
---|---|
author | Stärk, Hannes Dallago, Christian Heinzinger, Michael Rost, Burkhard |
author_facet | Stärk, Hannes Dallago, Christian Heinzinger, Michael Rost, Burkhard |
author_sort | Stärk, Hannes |
collection | PubMed |
description | SUMMARY: Although knowing where a protein functions in a cell is important to characterize biological processes, this information remains unavailable for most known proteins. Machine learning narrows the gap through predictions from expert-designed input features leveraging information from multiple sequence alignments (MSAs) that is resource expensive to generate. Here, we showcased using embeddings from protein language models for competitive localization prediction without MSAs. Our lightweight deep neural network architecture used a softmax weighted aggregation mechanism with linear complexity in sequence length referred to as light attention. The method significantly outperformed the state-of-the-art (SOTA) for 10 localization classes by about 8 percentage points (Q10). So far, this might be the highest improvement of just embeddings over MSAs. Our new test set highlighted the limits of standard static datasets: while inviting new models, they might not suffice to claim improvements over the SOTA. AVAILABILITY AND IMPLEMENTATION: The novel models are available as a web-service at http://embed.protein.properties. Code needed to reproduce results is provided at https://github.com/HannesStark/protein-localization. Predictions for the human proteome are available at https://zenodo.org/record/5047020. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics Advances online. |
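The description above characterizes light attention as a softmax-weighted aggregation over per-residue embeddings with linear complexity in sequence length. A minimal NumPy sketch of such a pooling step, assuming dense projections as stand-ins for the learned transformations (the weight matrices and dimensions here are illustrative, not the paper's actual architecture, which uses 1D convolutions over the embeddings):

```python
import numpy as np

def light_attention_pool(embeddings, w_att, w_val):
    """Softmax-weighted aggregation over sequence length (O(L) in length L).

    embeddings: (L, d) per-residue embeddings from a protein language model
    w_att, w_val: (d, d) projections producing attention scores and values
    Returns a fixed-size (d,) summary vector regardless of L.
    """
    att = embeddings @ w_att            # (L, d) per-residue attention scores
    val = embeddings @ w_val            # (L, d) transformed values
    # Softmax over the sequence dimension, independently per feature channel
    att = att - att.max(axis=0, keepdims=True)   # numerical stability
    weights = np.exp(att) / np.exp(att).sum(axis=0, keepdims=True)
    return (weights * val).sum(axis=0)  # (d,) weighted sum over residues

rng = np.random.default_rng(0)
L, d = 120, 32                          # toy sequence length and embedding size
x = rng.normal(size=(L, d))
out = light_attention_pool(x, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
print(out.shape)  # (32,)
```

Because every residue is touched once and the softmax runs only along the sequence axis, cost grows linearly with sequence length, unlike full self-attention's quadratic cost.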
format | Online Article Text |
id | pubmed-9710637 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Oxford University Press |
record_format | MEDLINE/PubMed |
spelling | pubmed-9710637 2023-01-24 Light attention predicts protein location from the language of life Stärk, Hannes Dallago, Christian Heinzinger, Michael Rost, Burkhard Bioinform Adv Original Article [abstract identical to the description field above] Oxford University Press 2021-11-19 /pmc/articles/PMC9710637/ /pubmed/36700108 http://dx.doi.org/10.1093/bioadv/vbab035 Text en © The Author(s) 2021. Published by Oxford University Press. https://creativecommons.org/licenses/by/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Original Article Stärk, Hannes Dallago, Christian Heinzinger, Michael Rost, Burkhard Light attention predicts protein location from the language of life |
title | Light attention predicts protein location from the language of life |
title_full | Light attention predicts protein location from the language of life |
title_fullStr | Light attention predicts protein location from the language of life |
title_full_unstemmed | Light attention predicts protein location from the language of life |
title_short | Light attention predicts protein location from the language of life |
title_sort | light attention predicts protein location from the language of life |
topic | Original Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9710637/ https://www.ncbi.nlm.nih.gov/pubmed/36700108 http://dx.doi.org/10.1093/bioadv/vbab035 |
work_keys_str_mv | AT starkhannes lightattentionpredictsproteinlocationfromthelanguageoflife AT dallagochristian lightattentionpredictsproteinlocationfromthelanguageoflife AT heinzingermichael lightattentionpredictsproteinlocationfromthelanguageoflife AT rostburkhard lightattentionpredictsproteinlocationfromthelanguageoflife |