Protein language models trained on multiple sequence alignments learn phylogenetic relationships
Self-supervised neural language models with attention have recently been applied to biological sequence data, advancing structure, function and mutational effect prediction. Some protein language models, including MSA Transformer and AlphaFold’s EvoFormer, take multiple sequence alignments (MSAs) of...
Main Authors: Lupo, Umberto; Sgarbossa, Damiano; Bitbol, Anne-Florence
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9588007/ https://www.ncbi.nlm.nih.gov/pubmed/36273003 http://dx.doi.org/10.1038/s41467-022-34032-y
Similar Items

- Generative power of a protein language model trained on multiple sequence alignments
  by: Sgarbossa, Damiano, et al.
  Published: (2023)
- Impact of phylogeny on structural contact inference from protein sequence data
  by: Dietler, Nicola, et al.
  Published: (2023)
- Phylogenetic correlations can suffice to infer protein partners from sequences
  by: Marmier, Guillaume, et al.
  Published: (2019)
- Inferring interaction partners from protein sequences using mutual information
  by: Bitbol, Anne-Florence
  Published: (2018)
- Leveraging protein language models for accurate multiple sequence alignments
  by: McWhite, Claire D., et al.
  Published: (2023)