
A total crapshoot? Evaluating bioinformatic decisions in animal diet metabarcoding analyses


Bibliographic Details
Main Authors: O'Rourke, Devon R., Bokulich, Nicholas A., Jusino, Michelle A., MacManes, Matthew D., Foster, Jeffrey T.
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc. 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7520210/
https://www.ncbi.nlm.nih.gov/pubmed/33005342
http://dx.doi.org/10.1002/ece3.6594
_version_ 1783587736176820224
author O'Rourke, Devon R.
Bokulich, Nicholas A.
Jusino, Michelle A.
MacManes, Matthew D.
Foster, Jeffrey T.
author_facet O'Rourke, Devon R.
Bokulich, Nicholas A.
Jusino, Michelle A.
MacManes, Matthew D.
Foster, Jeffrey T.
author_sort O'Rourke, Devon R.
collection PubMed
description Metabarcoding studies provide a powerful approach to estimate the diversity and abundance of organisms in mixed communities in nature. While strategies exist for optimizing sample and sequence library preparation, best practices for bioinformatic processing of amplicon sequence data are lacking in animal diet studies. Here we evaluate how decisions made in core bioinformatic processes, including sequence filtering, database design, and classification, can influence animal metabarcoding results. We show that denoising methods have lower error rates compared to traditional clustering methods, although these differences are largely mitigated by removing low‐abundance sequence variants. We also found that available reference datasets from GenBank and BOLD for the animal marker gene cytochrome oxidase I (COI) can be complementary, and we discuss methods to improve existing databases to include versioned releases. Taxonomic classification methods can dramatically affect results. For example, the commonly used Barcode of Life Database (BOLD) Classification API assigned fewer names to samples from order through species levels using both a mock community and bat guano samples compared to all other classifiers (vsearch‐SINTAX and q2‐feature‐classifier's BLAST + LCA, VSEARCH + LCA, and Naive Bayes classifiers). The lack of consensus on bioinformatics best practices limits comparisons among studies and may introduce biases. Our work suggests that biological mock communities offer a useful standard to evaluate the myriad computational decisions impacting animal metabarcoding accuracy. Further, these comparisons highlight the need for continual evaluations as new tools are adopted to ensure that the inferences drawn reflect meaningful biology instead of digital artifacts.
format Online
Article
Text
id pubmed-7520210
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher John Wiley and Sons Inc.
record_format MEDLINE/PubMed
spelling pubmed-75202102020-09-30 A total crapshoot? Evaluating bioinformatic decisions in animal diet metabarcoding analyses O'Rourke, Devon R. Bokulich, Nicholas A. Jusino, Michelle A. MacManes, Matthew D. Foster, Jeffrey T. Ecol Evol Original Research Metabarcoding studies provide a powerful approach to estimate the diversity and abundance of organisms in mixed communities in nature. While strategies exist for optimizing sample and sequence library preparation, best practices for bioinformatic processing of amplicon sequence data are lacking in animal diet studies. Here we evaluate how decisions made in core bioinformatic processes, including sequence filtering, database design, and classification, can influence animal metabarcoding results. We show that denoising methods have lower error rates compared to traditional clustering methods, although these differences are largely mitigated by removing low‐abundance sequence variants. We also found that available reference datasets from GenBank and BOLD for the animal marker gene cytochrome oxidase I (COI) can be complementary, and we discuss methods to improve existing databases to include versioned releases. Taxonomic classification methods can dramatically affect results. For example, the commonly used Barcode of Life Database (BOLD) Classification API assigned fewer names to samples from order through species levels using both a mock community and bat guano samples compared to all other classifiers (vsearch‐SINTAX and q2‐feature‐classifier's BLAST + LCA, VSEARCH + LCA, and Naive Bayes classifiers). The lack of consensus on bioinformatics best practices limits comparisons among studies and may introduce biases. Our work suggests that biological mock communities offer a useful standard to evaluate the myriad computational decisions impacting animal metabarcoding accuracy. Further, these comparisons highlight the need for continual evaluations as new tools are adopted to ensure that the inferences drawn reflect meaningful biology instead of digital artifacts. John Wiley and Sons Inc. 2020-07-23 /pmc/articles/PMC7520210/ /pubmed/33005342 http://dx.doi.org/10.1002/ece3.6594 Text en © 2020 The Authors. Ecology and Evolution published by John Wiley & Sons Ltd This is an open access article under the terms of the http://creativecommons.org/licenses/by/4.0/ License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
spellingShingle Original Research
O'Rourke, Devon R.
Bokulich, Nicholas A.
Jusino, Michelle A.
MacManes, Matthew D.
Foster, Jeffrey T.
A total crapshoot? Evaluating bioinformatic decisions in animal diet metabarcoding analyses
title A total crapshoot? Evaluating bioinformatic decisions in animal diet metabarcoding analyses
title_full A total crapshoot? Evaluating bioinformatic decisions in animal diet metabarcoding analyses
title_fullStr A total crapshoot? Evaluating bioinformatic decisions in animal diet metabarcoding analyses
title_full_unstemmed A total crapshoot? Evaluating bioinformatic decisions in animal diet metabarcoding analyses
title_short A total crapshoot? Evaluating bioinformatic decisions in animal diet metabarcoding analyses
title_sort total crapshoot? evaluating bioinformatic decisions in animal diet metabarcoding analyses
topic Original Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7520210/
https://www.ncbi.nlm.nih.gov/pubmed/33005342
http://dx.doi.org/10.1002/ece3.6594
work_keys_str_mv AT orourkedevonr atotalcrapshootevaluatingbioinformaticdecisionsinanimaldietmetabarcodinganalyses
AT bokulichnicholasa atotalcrapshootevaluatingbioinformaticdecisionsinanimaldietmetabarcodinganalyses
AT jusinomichellea atotalcrapshootevaluatingbioinformaticdecisionsinanimaldietmetabarcodinganalyses
AT macmanesmatthewd atotalcrapshootevaluatingbioinformaticdecisionsinanimaldietmetabarcodinganalyses
AT fosterjeffreyt atotalcrapshootevaluatingbioinformaticdecisionsinanimaldietmetabarcodinganalyses
AT orourkedevonr totalcrapshootevaluatingbioinformaticdecisionsinanimaldietmetabarcodinganalyses
AT bokulichnicholasa totalcrapshootevaluatingbioinformaticdecisionsinanimaldietmetabarcodinganalyses
AT jusinomichellea totalcrapshootevaluatingbioinformaticdecisionsinanimaldietmetabarcodinganalyses
AT macmanesmatthewd totalcrapshootevaluatingbioinformaticdecisionsinanimaldietmetabarcodinganalyses
AT fosterjeffreyt totalcrapshootevaluatingbioinformaticdecisionsinanimaldietmetabarcodinganalyses
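
Note: the abstract in this record compares concrete bioinformatic choices (removing low-abundance sequence variants, and assigning taxonomy with classifiers such as q2-feature-classifier's naive Bayes and vsearch's SINTAX). The following is a minimal illustrative sketch of those steps, not the authors' pipeline: it assumes QIIME 2 and vsearch are installed, and all file names, artifact names, and thresholds (table.qza, rep-seqs.qza, coi-classifier.qza, coi-reference.fasta, the min-frequency cutoff) are hypothetical placeholders rather than values from the paper.

    """
    Illustrative sketch only: two of the processing decisions discussed in the
    abstract, expressed as calls to the QIIME 2 and vsearch command-line tools.
    All file names and thresholds below are placeholders, not values from the paper.
    """
    import subprocess

    def run(cmd):
        """Run a command line and raise if it exits with an error."""
        print(">>", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Remove low-abundance sequence variants from the feature table; the
    #    abstract reports that this step largely mitigates error-rate differences
    #    between denoising and clustering pipelines. The cutoff is a placeholder.
    run([
        "qiime", "feature-table", "filter-features",
        "--i-table", "table.qza",                   # hypothetical feature table
        "--p-min-frequency", "10",                  # assumed abundance threshold
        "--o-filtered-table", "table-filtered.qza",
    ])

    # 2. Assign taxonomy with q2-feature-classifier's naive Bayes classifier,
    #    one of the classifiers compared in the study.
    run([
        "qiime", "feature-classifier", "classify-sklearn",
        "--i-classifier", "coi-classifier.qza",     # hypothetical pre-trained COI classifier
        "--i-reads", "rep-seqs.qza",                # hypothetical representative sequences
        "--o-classification", "taxonomy-nb.qza",
    ])

    # 3. For comparison, classify the same sequences with vsearch's SINTAX
    #    implementation against a COI reference FASTA (also a placeholder file).
    run([
        "vsearch", "--sintax", "rep-seqs.fasta",
        "--db", "coi-reference.fasta",
        "--sintax_cutoff", "0.8",                   # commonly used confidence cutoff
        "--tabbedout", "taxonomy-sintax.txt",
    ])

The abundance cutoff and classifier choice shown above are exactly the kinds of decisions the study evaluates against mock communities, so in practice they should be tuned and validated rather than taken from this sketch.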