Single-Cell Multimodal Prediction via Transformers
Main Authors: | Tang, Wenzhuo; Wen, Hongzhi; Liu, Renming; Ding, Jiayuan; Jin, Wei; Xie, Yuying; Liu, Hui; Tang, Jiliang |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Cornell University 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10462176/ https://www.ncbi.nlm.nih.gov/pubmed/37645040 |
_version_ | 1785098002762825728 |
---|---|
author | Tang, Wenzhuo Wen, Hongzhi Liu, Renming Ding, Jiayuan Jin, Wei Xie, Yuying Liu, Hui Tang, Jiliang |
author_facet | Tang, Wenzhuo Wen, Hongzhi Liu, Renming Ding, Jiayuan Jin, Wei Xie, Yuying Liu, Hui Tang, Jiliang |
author_sort | Tang, Wenzhuo |
collection | PubMed |
description | The recent development of multimodal single-cell technology has made it possible to acquire multiple omics data from individual cells, thereby enabling a deeper understanding of cellular states and dynamics. Nevertheless, the proliferation of multimodal single-cell data also introduces tremendous challenges in modeling the complex interactions among different modalities. Recently advanced methods focus on constructing static interaction graphs and applying graph neural networks (GNNs) to learn from multimodal data. However, such static graphs can be suboptimal because they do not take advantage of downstream task information; meanwhile, GNNs also have inherent limitations when GNN layers are deeply stacked. To tackle these issues, in this work we investigate how to leverage transformers for multimodal single-cell data in an end-to-end manner while exploiting downstream task information. In particular, we propose the scMoFormer framework, which can readily incorporate external domain knowledge and model the interactions within each modality and across modalities. Extensive experiments demonstrate that scMoFormer achieves superior performance on various benchmark datasets. Remarkably, scMoFormer won a Kaggle silver medal, ranking 24/1221 (top 2%) without ensembling, in a NeurIPS 2022 competition. Our implementation is publicly available on GitHub. |
format | Online Article Text |
id | pubmed-10462176 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Cornell University |
record_format | MEDLINE/PubMed |
spelling | pubmed-10462176 2023-08-29 Single-Cell Multimodal Prediction via Transformers Tang, Wenzhuo Wen, Hongzhi Liu, Renming Ding, Jiayuan Jin, Wei Xie, Yuying Liu, Hui Tang, Jiliang ArXiv Article The recent development of multimodal single-cell technology has made it possible to acquire multiple omics data from individual cells, thereby enabling a deeper understanding of cellular states and dynamics. Nevertheless, the proliferation of multimodal single-cell data also introduces tremendous challenges in modeling the complex interactions among different modalities. Recently advanced methods focus on constructing static interaction graphs and applying graph neural networks (GNNs) to learn from multimodal data. However, such static graphs can be suboptimal because they do not take advantage of downstream task information; meanwhile, GNNs also have inherent limitations when GNN layers are deeply stacked. To tackle these issues, in this work we investigate how to leverage transformers for multimodal single-cell data in an end-to-end manner while exploiting downstream task information. In particular, we propose the scMoFormer framework, which can readily incorporate external domain knowledge and model the interactions within each modality and across modalities. Extensive experiments demonstrate that scMoFormer achieves superior performance on various benchmark datasets. Remarkably, scMoFormer won a Kaggle silver medal, ranking 24/1221 (top 2%) without ensembling, in a NeurIPS 2022 competition. Our implementation is publicly available on GitHub. Cornell University 2023-10-13 /pmc/articles/PMC10462176/ /pubmed/37645040 Text en https://creativecommons.org/licenses/by/4.0/ This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee, provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). |
spellingShingle | Article Tang, Wenzhuo Wen, Hongzhi Liu, Renming Ding, Jiayuan Jin, Wei Xie, Yuying Liu, Hui Tang, Jiliang Single-Cell Multimodal Prediction via Transformers |
title | Single-Cell Multimodal Prediction via Transformers |
title_full | Single-Cell Multimodal Prediction via Transformers |
title_fullStr | Single-Cell Multimodal Prediction via Transformers |
title_full_unstemmed | Single-Cell Multimodal Prediction via Transformers |
title_short | Single-Cell Multimodal Prediction via Transformers |
title_sort | single-cell multimodal prediction via transformers |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10462176/ https://www.ncbi.nlm.nih.gov/pubmed/37645040 |
work_keys_str_mv | AT tangwenzhuo singlecellmultimodalpredictionviatransformers AT wenhongzhi singlecellmultimodalpredictionviatransformers AT liurenming singlecellmultimodalpredictionviatransformers AT dingjiayuan singlecellmultimodalpredictionviatransformers AT jinwei singlecellmultimodalpredictionviatransformers AT xieyuying singlecellmultimodalpredictionviatransformers AT liuhui singlecellmultimodalpredictionviatransformers AT tangjiliang singlecellmultimodalpredictionviatransformers |
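Note on the approach described in the description field above: the abstract outlines modeling interactions within each modality and across modalities with transformers, trained end to end so that the downstream prediction task shapes the learned interactions. The PyTorch sketch below is only a minimal illustration of that general idea under assumed settings (toy dimensions, a single gene token per cell, random data); it is not the authors' scMoFormer implementation, and every name in it (ToyCrossModalModel, d_model, the protein-embedding table) is hypothetical.

```python
# Minimal illustrative sketch (NOT the authors' scMoFormer code): two modality-specific
# pieces -- a transformer encoder over gene features and learned protein tokens -- joined
# by cross-attention and trained end to end on a downstream protein-prediction task.
# All dimensions, layer counts, and the toy data below are assumptions for illustration.
import torch
import torch.nn as nn


class ToyCrossModalModel(nn.Module):
    def __init__(self, n_genes: int, n_proteins: int, d_model: int = 64):
        super().__init__()
        # Project raw gene expression into a shared embedding space.
        self.gene_proj = nn.Linear(n_genes, d_model)
        # One learned token per target protein (hypothetical stand-in for a protein modality).
        self.protein_embed = nn.Embedding(n_proteins, d_model)
        # Within-modality interactions: a small transformer encoder over gene tokens.
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.gene_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Cross-modality interactions: protein tokens attend to gene tokens.
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.head = nn.Linear(d_model, 1)

    def forward(self, gene_expr: torch.Tensor) -> torch.Tensor:
        # gene_expr: (batch, n_genes); each cell is collapsed to a single gene token here.
        gene_tokens = self.gene_encoder(self.gene_proj(gene_expr).unsqueeze(1))
        batch = gene_expr.shape[0]
        protein_tokens = self.protein_embed.weight.unsqueeze(0).expand(batch, -1, -1)
        fused, _ = self.cross_attn(protein_tokens, gene_tokens, gene_tokens)
        return self.head(fused).squeeze(-1)  # (batch, n_proteins)


if __name__ == "__main__":
    model = ToyCrossModalModel(n_genes=200, n_proteins=20)
    x = torch.randn(8, 200)  # toy gene-expression matrix for 8 cells
    y = torch.randn(8, 20)   # toy protein-abundance targets
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()          # end-to-end gradient driven by the downstream task
    print(f"toy MSE: {loss.item():.4f}")
```

In this toy setup the downstream regression loss shapes both the within-modality encoder and the cross-attention weights, which is the end-to-end, task-aware property the abstract contrasts with static, precomputed interaction graphs.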