Benchmarking and Analysis of Protein Docking Performance in Rosetta v3.2
RosettaDock has been increasingly used in protein docking and design strategies to predict the structure of protein-protein interfaces. Here we test the capabilities of RosettaDock v3.2, part of the newly developed Rosetta v3.2 modeling suite, against Docking Benchmark 3.0, and compare it with RosettaDock v2.3, the latest version of the previous Rosetta software package.
Main Authors: | Chaudhury, Sidhartha; Berrondo, Monica; Weitzner, Brian D.; Muthu, Pravin; Bergman, Hannah; Gray, Jeffrey J. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2011 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3149062/ https://www.ncbi.nlm.nih.gov/pubmed/21829626 http://dx.doi.org/10.1371/journal.pone.0022477 |
_version_ | 1782209414653018112 |
---|---|
author | Chaudhury, Sidhartha Berrondo, Monica Weitzner, Brian D. Muthu, Pravin Bergman, Hannah Gray, Jeffrey J. |
author_facet | Chaudhury, Sidhartha Berrondo, Monica Weitzner, Brian D. Muthu, Pravin Bergman, Hannah Gray, Jeffrey J. |
author_sort | Chaudhury, Sidhartha |
collection | PubMed |
description | RosettaDock has been increasingly used in protein docking and design strategies to predict the structure of protein-protein interfaces. Here we test the capabilities of RosettaDock v3.2, part of the newly developed Rosetta v3.2 modeling suite, against Docking Benchmark 3.0, and compare it with RosettaDock v2.3, the latest version of the previous Rosetta software package. The benchmark contains a diverse set of 116 docking targets including 22 antibody-antigen complexes, 33 enzyme-inhibitor complexes, and 60 ‘other’ complexes. These targets were further classified by expected docking difficulty into 84 rigid-body targets, 17 medium targets, and 14 difficult targets. We carried out local docking perturbations for each target, using the unbound structures when available, in both RosettaDock v2.3 and v3.2. Overall, the performance of RosettaDock v2.3 and v3.2 was similar. RosettaDock v3.2 achieved 56 docking funnels, compared to 49 in v2.3. A breakdown of docking performance by protein complex type shows that RosettaDock v3.2 achieved docking funnels for 63% of antibody-antigen targets, 62% of enzyme-inhibitor targets, and 35% of ‘other’ targets. In terms of docking difficulty, RosettaDock v3.2 achieved funnels for 58% of rigid-body targets, 30% of medium targets, and 14% of difficult targets. For targets that failed, we carried out additional analyses to identify the cause of failure, which showed that binding-induced backbone conformational changes account for the majority of failures. We also present a bootstrap statistical analysis that quantifies the reliability of the stochastic docking results. Finally, we demonstrate the additional functionality available in RosettaDock v3.2 by incorporating small molecules and non-protein co-factors in the docking of a smaller target set. This study marks the most extensive benchmarking of the RosettaDock module to date and establishes a baseline for future research in protein interface modeling and structure prediction. |
format | Online Article Text |
id | pubmed-3149062 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2011 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-31490622011-08-09 Benchmarking and Analysis of Protein Docking Performance in Rosetta v3.2 Chaudhury, Sidhartha Berrondo, Monica Weitzner, Brian D. Muthu, Pravin Bergman, Hannah Gray, Jeffrey J. PLoS One Research Article RosettaDock has been increasingly used in protein docking and design strategies to predict the structure of protein-protein interfaces. Here we test the capabilities of RosettaDock v3.2, part of the newly developed Rosetta v3.2 modeling suite, against Docking Benchmark 3.0, and compare it with RosettaDock v2.3, the latest version of the previous Rosetta software package. The benchmark contains a diverse set of 116 docking targets including 22 antibody-antigen complexes, 33 enzyme-inhibitor complexes, and 60 ‘other’ complexes. These targets were further classified by expected docking difficulty into 84 rigid-body targets, 17 medium targets, and 14 difficult targets. We carried out local docking perturbations for each target, using the unbound structures when available, in both RosettaDock v2.3 and v3.2. Overall, the performance of RosettaDock v2.3 and v3.2 was similar. RosettaDock v3.2 achieved 56 docking funnels, compared to 49 in v2.3. A breakdown of docking performance by protein complex type shows that RosettaDock v3.2 achieved docking funnels for 63% of antibody-antigen targets, 62% of enzyme-inhibitor targets, and 35% of ‘other’ targets. In terms of docking difficulty, RosettaDock v3.2 achieved funnels for 58% of rigid-body targets, 30% of medium targets, and 14% of difficult targets. For targets that failed, we carried out additional analyses to identify the cause of failure, which showed that binding-induced backbone conformational changes account for the majority of failures. We also present a bootstrap statistical analysis that quantifies the reliability of the stochastic docking results. Finally, we demonstrate the additional functionality available in RosettaDock v3.2 by incorporating small molecules and non-protein co-factors in the docking of a smaller target set. This study marks the most extensive benchmarking of the RosettaDock module to date and establishes a baseline for future research in protein interface modeling and structure prediction. Public Library of Science 2011-08-02 /pmc/articles/PMC3149062/ /pubmed/21829626 http://dx.doi.org/10.1371/journal.pone.0022477 Text en Chaudhury et al. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited. |
spellingShingle | Research Article Chaudhury, Sidhartha Berrondo, Monica Weitzner, Brian D. Muthu, Pravin Bergman, Hannah Gray, Jeffrey J. Benchmarking and Analysis of Protein Docking Performance in Rosetta v3.2 |
title | Benchmarking and Analysis of Protein Docking Performance in Rosetta v3.2 |
title_full | Benchmarking and Analysis of Protein Docking Performance in Rosetta v3.2 |
title_fullStr | Benchmarking and Analysis of Protein Docking Performance in Rosetta v3.2 |
title_full_unstemmed | Benchmarking and Analysis of Protein Docking Performance in Rosetta v3.2 |
title_short | Benchmarking and Analysis of Protein Docking Performance in Rosetta v3.2 |
title_sort | benchmarking and analysis of protein docking performance in rosetta v3.2 |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3149062/ https://www.ncbi.nlm.nih.gov/pubmed/21829626 http://dx.doi.org/10.1371/journal.pone.0022477 |
work_keys_str_mv | AT chaudhurysidhartha benchmarkingandanalysisofproteindockingperformanceinrosettav32 AT berrondomonica benchmarkingandanalysisofproteindockingperformanceinrosettav32 AT weitznerbriand benchmarkingandanalysisofproteindockingperformanceinrosettav32 AT muthupravin benchmarkingandanalysisofproteindockingperformanceinrosettav32 AT bergmanhannah benchmarkingandanalysisofproteindockingperformanceinrosettav32 AT grayjeffreyj benchmarkingandanalysisofproteindockingperformanceinrosettav32 |
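The description above mentions a bootstrap statistical analysis used to quantify the reliability of the stochastic docking results. The sketch below is a minimal illustration of that idea, not the paper's actual protocol: it assumes a hypothetical funnel criterion (at least 3 near-native decoys, by an interface-RMSD cutoff, among the 5 top-scoring decoys) and resamples a decoy set with replacement to estimate how often a funnel call would survive replicate runs. All function names, thresholds, and the synthetic data are illustrative assumptions.

```python
import random

def has_funnel(decoys, top_n=5, min_near_native=3, rmsd_cutoff=5.0):
    # A target "has a docking funnel" here if at least `min_near_native` of the
    # `top_n` best-scoring decoys are near-native (interface RMSD below cutoff).
    # These thresholds are illustrative assumptions, not the published criterion.
    top = sorted(decoys, key=lambda d: d[0])[:top_n]
    return sum(1 for _, rmsd in top if rmsd < rmsd_cutoff) >= min_near_native

def bootstrap_funnel_rate(decoys, n_boot=1000, seed=0):
    # Resample the decoy set with replacement and count how often the funnel
    # criterion still holds -- a rough measure of how reliable a funnel call
    # made from a single stochastic docking run would be across replicates.
    rng = random.Random(seed)
    n = len(decoys)
    hits = 0
    for _ in range(n_boot):
        sample = [decoys[rng.randrange(n)] for _ in range(n)]
        if has_funnel(sample):
            hits += 1
    return hits / n_boot

if __name__ == "__main__":
    rng = random.Random(42)
    # Synthetic decoy set of (score, interface RMSD) pairs; lower score is better.
    # Most decoys are far from native; a small near-native cluster scores slightly better.
    decoys = [(rng.gauss(0.0, 1.0), rng.uniform(5.0, 20.0)) for _ in range(950)]
    decoys += [(rng.gauss(-1.5, 1.0), rng.uniform(0.5, 4.5)) for _ in range(50)]
    print("bootstrap funnel rate:", bootstrap_funnel_rate(decoys))
```

A bootstrap funnel rate near 1.0 suggests the funnel call is robust to sampling noise, while a rate near 0.5 means roughly half of replicate decoy sets would flip the call.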