
CAMR: cross-aligned multimodal representation learning for cancer survival prediction

Bibliographic Details
Main Authors: Wu, Xingqi, Shi, Yi, Wang, Minghui, Li, Ao
Format: Online Article Text
Language: English
Published: Oxford University Press 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9857974/
https://www.ncbi.nlm.nih.gov/pubmed/36637188
http://dx.doi.org/10.1093/bioinformatics/btad025
_version_ 1784873982767398912
author Wu, Xingqi
Shi, Yi
Wang, Minghui
Li, Ao
author_facet Wu, Xingqi
Shi, Yi
Wang, Minghui
Li, Ao
author_sort Wu, Xingqi
collection PubMed
description MOTIVATION: Accurately predicting cancer survival is crucial for helping clinicians plan appropriate treatments, which greatly improves the quality of life of cancer patients and reduces the related medical costs. Recent advances in survival prediction methods suggest that integrating complementary information from different modalities, e.g. histopathological images and genomic data, plays a key role in enhancing predictive performance. Despite promising results obtained by existing multimodal methods, the disparate and heterogeneous characteristics of multimodal data cause the so-called modality gap problem, which results in dramatically divergent modality representations in feature space. Consequently, detrimental modality gaps make it difficult to comprehensively integrate multimodal information via representation learning and therefore pose a great challenge to further improvements of cancer survival prediction. RESULTS: To solve the above problems, we propose a novel method called cross-aligned multimodal representation learning (CAMR), which generates both modality-invariant and -specific representations for more accurate cancer survival prediction. Specifically, a cross-modality representation alignment learning network is introduced to reduce modality gaps by effectively learning modality-invariant representations in a common subspace, which is achieved by aligning the distributions of different modality representations through adversarial training. In addition, we adopt a cross-modality fusion module to fuse the modality-invariant representations into a unified cross-modality representation for each patient. Meanwhile, CAMR learns modality-specific representations that complement the modality-invariant representations and therefore provide a holistic view of the multimodal data for cancer survival prediction. Comprehensive experimental results demonstrate that CAMR successfully narrows modality gaps and consistently yields better performance than other survival prediction methods using multimodal data. AVAILABILITY AND IMPLEMENTATION: CAMR is freely available at https://github.com/wxq-ustc/CAMR. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
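
The RESULTS paragraph above describes the CAMR architecture at a high level: per-modality encoders, adversarial alignment of modality-invariant representations in a common subspace, a cross-modality fusion module, and complementary modality-specific representations feeding a survival head. The following is a minimal PyTorch sketch of that idea, not the authors' released code (see the GitHub link above for the actual implementation); the module names (CAMRSketch, grad_reverse), feature dimensions, input modalities, and the use of gradient reversal with a modality discriminator for the adversarial alignment step are all illustrative assumptions.

```python
# Minimal sketch of cross-aligned multimodal representation learning for survival
# prediction: modality-invariant representations aligned adversarially in a common
# subspace, plus modality-specific representations, fused into a risk score.
# Layer sizes and the two modalities (image features, genomic features) are assumptions.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward pass, so the
    encoders learn to fool the modality discriminator (adversarial alignment)."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class CAMRSketch(nn.Module):
    def __init__(self, img_dim=1024, gene_dim=2000, common_dim=128, specific_dim=128):
        super().__init__()
        # Invariant encoders project both modalities into a shared (common) subspace.
        self.img_invariant = nn.Sequential(nn.Linear(img_dim, common_dim), nn.ReLU())
        self.gene_invariant = nn.Sequential(nn.Linear(gene_dim, common_dim), nn.ReLU())
        # Specific encoders keep information unique to each modality.
        self.img_specific = nn.Sequential(nn.Linear(img_dim, specific_dim), nn.ReLU())
        self.gene_specific = nn.Sequential(nn.Linear(gene_dim, specific_dim), nn.ReLU())
        # Discriminator tries to tell which modality an invariant representation came from.
        self.discriminator = nn.Sequential(nn.Linear(common_dim, 64), nn.ReLU(), nn.Linear(64, 2))
        # Fuse the two invariant representations into one cross-modality vector per patient.
        self.fusion = nn.Sequential(nn.Linear(2 * common_dim, common_dim), nn.ReLU())
        # Survival head outputs a single risk score (e.g. for a Cox partial-likelihood loss).
        self.risk_head = nn.Linear(common_dim + 2 * specific_dim, 1)

    def forward(self, img_feat, gene_feat, lambd=1.0):
        inv_img, inv_gene = self.img_invariant(img_feat), self.gene_invariant(gene_feat)
        spec_img, spec_gene = self.img_specific(img_feat), self.gene_specific(gene_feat)
        # Adversarial branch: gradient reversal pushes the invariant representations of
        # both modalities toward the same distribution, narrowing the modality gap.
        domain_logits = self.discriminator(grad_reverse(torch.cat([inv_img, inv_gene], dim=0), lambd))
        fused = self.fusion(torch.cat([inv_img, inv_gene], dim=1))
        risk = self.risk_head(torch.cat([fused, spec_img, spec_gene], dim=1))
        return risk, domain_logits

# Usage: risk scores feed a survival loss; domain_logits feed a cross-entropy loss against
# modality labels (0 = image, 1 = genomic) to drive the adversarial alignment term.
model = CAMRSketch()
risk, domain_logits = model(torch.randn(4, 1024), torch.randn(4, 2000))
```
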
format Online
Article
Text
id pubmed-9857974
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Oxford University Press
record_format MEDLINE/PubMed
spelling pubmed-98579742023-01-23 CAMR: cross-aligned multimodal representation learning for cancer survival prediction Wu, Xingqi Shi, Yi Wang, Minghui Li, Ao Bioinformatics Original Paper MOTIVATION: Accurately predicting cancer survival is crucial for helping clinicians to plan appropriate treatments, which largely improves the life quality of cancer patients and spares the related medical costs. Recent advances in survival prediction methods suggest that integrating complementary information from different modalities, e.g. histopathological images and genomic data, plays a key role in enhancing predictive performance. Despite promising results obtained by existing multimodal methods, the disparate and heterogeneous characteristics of multimodal data cause the so-called modality gap problem, which brings in dramatically diverse modality representations in feature space. Consequently, detrimental modality gaps make it difficult for comprehensive integration of multimodal information via representation learning and therefore pose a great challenge to further improvements of cancer survival prediction. RESULTS: To solve the above problems, we propose a novel method called cross-aligned multimodal representation learning (CAMR), which generates both modality-invariant and -specific representations for more accurate cancer survival prediction. Specifically, a cross-modality representation alignment learning network is introduced to reduce modality gaps by effectively learning modality-invariant representations in a common subspace, which is achieved by aligning the distributions of different modality representations through adversarial training. Besides, we adopt a cross-modality fusion module to fuse modality-invariant representations into a unified cross-modality representation for each patient. Meanwhile, CAMR learns modality-specific representations which complement modality-invariant representations and therefore provides a holistic view of the multimodal data for cancer survival prediction. Comprehensive experiment results demonstrate that CAMR can successfully narrow modality gaps and consistently yields better performance than other survival prediction methods using multimodal data. AVAILABILITY AND IMPLEMENTATION: CAMR is freely available at https://github.com/wxq-ustc/CAMR. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online. Oxford University Press 2023-01-13 /pmc/articles/PMC9857974/ /pubmed/36637188 http://dx.doi.org/10.1093/bioinformatics/btad025 Text en © The Author(s) 2023. Published by Oxford University Press. https://creativecommons.org/licenses/by/4.0/This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Original Paper
Wu, Xingqi
Shi, Yi
Wang, Minghui
Li, Ao
CAMR: cross-aligned multimodal representation learning for cancer survival prediction
title CAMR: cross-aligned multimodal representation learning for cancer survival prediction
title_full CAMR: cross-aligned multimodal representation learning for cancer survival prediction
title_fullStr CAMR: cross-aligned multimodal representation learning for cancer survival prediction
title_full_unstemmed CAMR: cross-aligned multimodal representation learning for cancer survival prediction
title_short CAMR: cross-aligned multimodal representation learning for cancer survival prediction
title_sort camr: cross-aligned multimodal representation learning for cancer survival prediction
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9857974/
https://www.ncbi.nlm.nih.gov/pubmed/36637188
http://dx.doi.org/10.1093/bioinformatics/btad025
work_keys_str_mv AT wuxingqi camrcrossalignedmultimodalrepresentationlearningforcancersurvivalprediction
AT shiyi camrcrossalignedmultimodalrepresentationlearningforcancersurvivalprediction
AT wangminghui camrcrossalignedmultimodalrepresentationlearningforcancersurvivalprediction
AT liao camrcrossalignedmultimodalrepresentationlearningforcancersurvivalprediction