Explainable multi-task learning for multi-modality biological data analysis
Current biotechnologies can simultaneously measure multiple high-dimensional modalities (e.g., RNA, DNA accessibility, and protein) from the same cells. A combination of different analytical tasks (e.g., multi-modal integration and cross-modal analysis) is required to comprehensively understand such...
Main Authors: | Tang, Xin; Zhang, Jiawei; He, Yichun; Zhang, Xinhe; Lin, Zuwan; Partarrieu, Sebastian; Hanna, Emma Bou; Ren, Zhaolin; Shen, Hao; Yang, Yuhong; Wang, Xiao; Li, Na; Ding, Jie; Liu, Jia |
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10156823/ https://www.ncbi.nlm.nih.gov/pubmed/37137905 http://dx.doi.org/10.1038/s41467-023-37477-x |
_version_ | 1785036621285949440 |
author | Tang, Xin Zhang, Jiawei He, Yichun Zhang, Xinhe Lin, Zuwan Partarrieu, Sebastian Hanna, Emma Bou Ren, Zhaolin Shen, Hao Yang, Yuhong Wang, Xiao Li, Na Ding, Jie Liu, Jia |
author_facet | Tang, Xin Zhang, Jiawei He, Yichun Zhang, Xinhe Lin, Zuwan Partarrieu, Sebastian Hanna, Emma Bou Ren, Zhaolin Shen, Hao Yang, Yuhong Wang, Xiao Li, Na Ding, Jie Liu, Jia |
author_sort | Tang, Xin |
collection | PubMed |
description | Current biotechnologies can simultaneously measure multiple high-dimensional modalities (e.g., RNA, DNA accessibility, and protein) from the same cells. A combination of different analytical tasks (e.g., multi-modal integration and cross-modal analysis) is required to comprehensively understand such data, inferring how gene regulation drives biological diversity and functions. However, current analytical methods are designed to perform a single task, only providing a partial picture of the multi-modal data. Here, we present UnitedNet, an explainable multi-task deep neural network capable of integrating different tasks to analyze single-cell multi-modality data. Applied to various multi-modality datasets (e.g., Patch-seq, multiome ATAC + gene expression, and spatial transcriptomics), UnitedNet demonstrates similar or better accuracy in multi-modal integration and cross-modal prediction compared with state-of-the-art methods. Moreover, by dissecting the trained UnitedNet with the explainable machine learning algorithm, we can directly quantify the relationship between gene expression and other modalities with cell-type specificity. UnitedNet is a comprehensive end-to-end framework that could be broadly applicable to single-cell multi-modality biology. This framework has the potential to facilitate the discovery of cell-type-specific regulation kinetics across transcriptomics and other modalities. |
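For orientation, the sketch below illustrates the general pattern the abstract describes: per-modality encoders projecting into a shared latent space (multi-modal integration) and decoders that translate between modalities (cross-modal prediction), trained jointly with a multi-task loss. The class name, layer sizes, and loss terms are illustrative assumptions made for this record; this is not the published UnitedNet implementation.

```python
# Minimal illustrative sketch (hypothetical, not the authors' code) of a two-modality
# multi-task autoencoder: shared latent space for integration, cross-modal decoders
# for prediction. Assumes PyTorch is installed.
import torch
import torch.nn as nn

class CrossModalAutoencoder(nn.Module):
    def __init__(self, dim_a: int, dim_b: int, latent: int = 32):
        super().__init__()
        # One encoder per modality, both projecting into a shared latent space.
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 128), nn.ReLU(), nn.Linear(128, latent))
        self.enc_b = nn.Sequential(nn.Linear(dim_b, 128), nn.ReLU(), nn.Linear(128, latent))
        # Decoders map a latent code back to either modality, enabling A -> B and B -> A.
        self.dec_a = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim_a))
        self.dec_b = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim_b))

    def forward(self, x_a, x_b):
        z_a, z_b = self.enc_a(x_a), self.enc_b(x_b)
        return {
            "recon_a": self.dec_a(z_a), "recon_b": self.dec_b(z_b),    # within-modality
            "a_from_b": self.dec_a(z_b), "b_from_a": self.dec_b(z_a),  # cross-modal
            "z_a": z_a, "z_b": z_b,
        }

def multitask_loss(out, x_a, x_b):
    # Joint objective: reconstruction + cross-modal prediction + latent alignment.
    mse = nn.functional.mse_loss
    recon = mse(out["recon_a"], x_a) + mse(out["recon_b"], x_b)
    cross = mse(out["a_from_b"], x_a) + mse(out["b_from_a"], x_b)
    align = mse(out["z_a"], out["z_b"])  # pull paired cells together across modalities
    return recon + cross + align

# Toy usage with random stand-ins for, e.g., gene expression (A) and protein (B).
model = CrossModalAutoencoder(dim_a=200, dim_b=30)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_a, x_b = torch.randn(64, 200), torch.randn(64, 30)
for _ in range(5):
    opt.zero_grad()
    loss = multitask_loss(model(x_a, x_b), x_a, x_b)
    loss.backward()
    opt.step()
```

The "explainability" step the abstract mentions would then attribute a trained model's cross-modal predictions back to input features (e.g., with a feature-attribution method such as SHAP) to quantify gene–modality relationships per cell type; that step is omitted here for brevity.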
format | Online Article Text |
id | pubmed-10156823 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-101568232023-05-05 Explainable multi-task learning for multi-modality biological data analysis Tang, Xin Zhang, Jiawei He, Yichun Zhang, Xinhe Lin, Zuwan Partarrieu, Sebastian Hanna, Emma Bou Ren, Zhaolin Shen, Hao Yang, Yuhong Wang, Xiao Li, Na Ding, Jie Liu, Jia Nat Commun Article Current biotechnologies can simultaneously measure multiple high-dimensional modalities (e.g., RNA, DNA accessibility, and protein) from the same cells. A combination of different analytical tasks (e.g., multi-modal integration and cross-modal analysis) is required to comprehensively understand such data, inferring how gene regulation drives biological diversity and functions. However, current analytical methods are designed to perform a single task, only providing a partial picture of the multi-modal data. Here, we present UnitedNet, an explainable multi-task deep neural network capable of integrating different tasks to analyze single-cell multi-modality data. Applied to various multi-modality datasets (e.g., Patch-seq, multiome ATAC + gene expression, and spatial transcriptomics), UnitedNet demonstrates similar or better accuracy in multi-modal integration and cross-modal prediction compared with state-of-the-art methods. Moreover, by dissecting the trained UnitedNet with the explainable machine learning algorithm, we can directly quantify the relationship between gene expression and other modalities with cell-type specificity. UnitedNet is a comprehensive end-to-end framework that could be broadly applicable to single-cell multi-modality biology. This framework has the potential to facilitate the discovery of cell-type-specific regulation kinetics across transcriptomics and other modalities. Nature Publishing Group UK 2023-05-03 /pmc/articles/PMC10156823/ /pubmed/37137905 http://dx.doi.org/10.1038/s41467-023-37477-x Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Article Tang, Xin Zhang, Jiawei He, Yichun Zhang, Xinhe Lin, Zuwan Partarrieu, Sebastian Hanna, Emma Bou Ren, Zhaolin Shen, Hao Yang, Yuhong Wang, Xiao Li, Na Ding, Jie Liu, Jia Explainable multi-task learning for multi-modality biological data analysis |
title | Explainable multi-task learning for multi-modality biological data analysis |
title_full | Explainable multi-task learning for multi-modality biological data analysis |
title_fullStr | Explainable multi-task learning for multi-modality biological data analysis |
title_full_unstemmed | Explainable multi-task learning for multi-modality biological data analysis |
title_short | Explainable multi-task learning for multi-modality biological data analysis |
title_sort | explainable multi-task learning for multi-modality biological data analysis |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10156823/ https://www.ncbi.nlm.nih.gov/pubmed/37137905 http://dx.doi.org/10.1038/s41467-023-37477-x |
work_keys_str_mv | AT tangxin explainablemultitasklearningformultimodalitybiologicaldataanalysis AT zhangjiawei explainablemultitasklearningformultimodalitybiologicaldataanalysis AT heyichun explainablemultitasklearningformultimodalitybiologicaldataanalysis AT zhangxinhe explainablemultitasklearningformultimodalitybiologicaldataanalysis AT linzuwan explainablemultitasklearningformultimodalitybiologicaldataanalysis AT partarrieusebastian explainablemultitasklearningformultimodalitybiologicaldataanalysis AT hannaemmabou explainablemultitasklearningformultimodalitybiologicaldataanalysis AT renzhaolin explainablemultitasklearningformultimodalitybiologicaldataanalysis AT shenhao explainablemultitasklearningformultimodalitybiologicaldataanalysis AT yangyuhong explainablemultitasklearningformultimodalitybiologicaldataanalysis AT wangxiao explainablemultitasklearningformultimodalitybiologicaldataanalysis AT lina explainablemultitasklearningformultimodalitybiologicaldataanalysis AT dingjie explainablemultitasklearningformultimodalitybiologicaldataanalysis AT liujia explainablemultitasklearningformultimodalitybiologicaldataanalysis |