Challenges and opportunities to computationally deconvolve heterogeneous tissue with varying cell sizes using single cell RNA-sequencing datasets
Main authors:
Format: Online Article Text
Language: English
Published: Cornell University, 2023
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10197733/ ; https://www.ncbi.nlm.nih.gov/pubmed/37214135
Summary: Deconvolution of cell mixtures in “bulk” transcriptomic samples from homogenate human tissue is important for understanding the pathologies of diseases. However, several experimental and computational challenges remain in developing and implementing transcriptomics-based deconvolution approaches, especially those using a single cell/nuclei RNA-seq reference atlas, which are rapidly becoming available across many tissues. Notably, deconvolution algorithms are frequently developed using samples from tissues with similar cell sizes. However, brain tissue and immune cell populations contain cell types with substantially different cell sizes, total mRNA expression, and transcriptional activity. When existing deconvolution approaches are applied to these tissues, these systematic differences in cell size and transcriptomic activity confound accurate cell proportion estimates, which may instead quantify total mRNA content. Furthermore, there is a lack of standard reference atlases and computational approaches to facilitate integrative analyses, including not only bulk and single cell/nuclei RNA-seq data, but also new data modalities from spatial -omic or imaging approaches. New multi-assay datasets need to be collected with orthogonal data types generated from the same tissue block and the same individual, to serve as a “gold standard” for evaluating new and existing deconvolution methods. Below, we discuss these key challenges and how they can be addressed with the acquisition of new datasets and approaches to analysis.
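To make the cell-size confound concrete, the minimal sketch below simulates a bulk mixture from three hypothetical brain cell types with different total mRNA content and deconvolves it against a library-size-normalized reference using non-negative least squares (a generic stand-in for the reference-based methods discussed in the abstract; none of the profiles, size factors, or cell-type names come from the paper). The uncorrected estimates recover mRNA fractions that over-weight large cells such as neurons; dividing by an assumed per-cell mRNA content and renormalizing recovers the cell proportions.

```python
# Illustrative sketch only (hypothetical numbers, not the paper's method):
# shows how deconvolution against a library-size-normalized reference yields
# mRNA fractions, and how per-cell-type size factors convert them to cell
# proportions.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
cell_types = ["neuron", "astrocyte", "microglia"]
K, n_genes = len(cell_types), 200

# Hypothetical per-cell expression profiles: expected counts per gene
# for one cell of each type.
per_cell = rng.gamma(shape=2.0, scale=1.0, size=(n_genes, K))

# Assume neurons carry ~3x and astrocytes ~1.5x the total mRNA of microglia.
size_factors = np.array([3.0, 1.5, 1.0])
per_cell *= size_factors  # larger cells contribute more counts overall

# A typical scRNA-seq reference profile is library-size normalized per cell
# type, which discards the cell-size information.
reference = per_cell / per_cell.sum(axis=0)

# Simulate a bulk sample from known cell proportions (50/30/20).
true_cell_props = np.array([0.5, 0.3, 0.2])
bulk = per_cell @ true_cell_props

# Step 1: NNLS against the normalized reference recovers mRNA fractions,
# not cell proportions -- large cells are over-represented.
weights, _ = nnls(reference, bulk)
mrna_fractions = weights / weights.sum()

# Step 2: dividing by per-cell mRNA content (if known or estimable) and
# renormalizing approximates the fraction of cells of each type.
cell_size = per_cell.sum(axis=0)  # total mRNA per cell, by type
cell_props = mrna_fractions / cell_size
cell_props /= cell_props.sum()

for ct, f, p in zip(cell_types, mrna_fractions, cell_props):
    print(f"{ct:10s} mRNA fraction={f:.2f}  estimated cell proportion={p:.2f}")
```

In this toy setting the corrected proportions match the simulated 50/30/20 mixture, while the uncorrected mRNA fractions inflate the neuron estimate, which is the behavior the abstract describes when cell-size differences are ignored.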