
Deciphering “the language of nature”: A transformer-based language model for deleterious mutations in proteins

Various machine-learning models, including deep neural network models, have already been developed to predict the deleteriousness of missense (non-synonymous) mutations. The current state of the art, however, may still be improved by taking a fresh look at the biological problem with more sophisticated self-adaptive machine-learning approaches. Recent advances in natural language processing have shown transformer models, a type of deep neural network, to be particularly powerful at modeling sequence information with context dependence. In this study, we introduce MutFormer, a transformer-based model for the prediction of deleterious missense mutations, which uses reference and mutated protein sequences from the human genome as the primary features. MutFormer takes advantage of a combination of self-attention layers and convolutional layers to learn both long-range and short-range dependencies between amino acid mutations in a protein sequence. We first pre-trained MutFormer on reference protein sequences and mutated protein sequences resulting from common genetic variants observed in human populations. We next examined different fine-tuning methods to successfully apply the model to deleteriousness prediction of missense mutations. Finally, we evaluated MutFormer's performance on multiple testing datasets. We found that MutFormer performed as well as or better than a variety of existing tools, including those that used conventional machine-learning approaches. In conclusion, MutFormer considers sequence features that were not explored in previous studies and can complement existing computational predictions or empirically generated functional scores to improve our understanding of disease variants.
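
The abstract describes MutFormer's key architectural idea: pairing self-attention layers, which capture long-range dependencies between residues, with convolutional layers, which capture short-range local context. The following is a minimal, hypothetical PyTorch sketch of one way such a hybrid encoder block could be wired; the layer sizes, kernel width, and the exact manner in which MutFormer combines the two branches are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch: a transformer encoder block that pairs
# self-attention (long-range context) with a 1D convolution over the
# sequence (short-range context). Sizes and wiring are assumptions,
# not MutFormer's published architecture.
import torch
import torch.nn as nn

class AttentionConvBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=4, kernel_size=7):
        super().__init__()
        # Multi-head self-attention relates amino acids that are far
        # apart in the protein sequence.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # A 1D convolution models local context around each residue
        # (padding keeps the sequence length unchanged).
        self.conv = nn.Conv1d(d_model, d_model, kernel_size,
                              padding=kernel_size // 2)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                     # x: (batch, seq_len, d_model)
        a, _ = self.attn(x, x, x)             # long-range dependencies
        x = self.norm1(x + a)                 # residual + layer norm
        c = self.conv(x.transpose(1, 2)).transpose(1, 2)  # local context
        return self.norm2(x + c)

# Toy usage: embed the 20 standard amino acids and encode one sequence.
emb = nn.Embedding(20, 256)
tokens = torch.randint(0, 20, (1, 128))       # one protein, 128 residues
hidden = AttentionConvBlock()(emb(tokens))    # -> shape (1, 128, 256)

The residual-plus-normalization pattern above follows standard transformer encoders; the convolution branch is simply one plausible way to add an explicit local receptive field alongside attention.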

Bibliographic Details
Main Authors: Jiang, Theodore T., Fang, Li, Wang, Kai
Format: Online Article Text
Language: English
Published in: Innovation (Camb)
Published: Elsevier, 2023-07-27
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10448337/
https://www.ncbi.nlm.nih.gov/pubmed/37636282
http://dx.doi.org/10.1016/j.xinn.2023.100487
Rights: © 2023 The Authors. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).