
Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations on single cell data

Clustering high-dimensional data, such as images or biological measurements, is a long-standing problem and has been studied extensively. Recently, Deep Clustering has gained popularity due to its flexibility in fitting the specific peculiarities of complex data. Here we introduce the Mixture-of-Experts Similarity Variational Autoencoder (MoE-Sim-VAE), a novel generative clustering model. The model can learn multi-modal distributions of high-dimensional data and use these to generate realistic data with high efficacy and efficiency. MoE-Sim-VAE is based on a Variational Autoencoder (VAE), where the decoder consists of a Mixture-of-Experts (MoE) architecture. This specific architecture allows for various modes of the data to be automatically learned by means of the experts. Additionally, we encourage the lower dimensional latent representation of our model to follow a Gaussian mixture distribution and to accurately represent the similarities between the data points. We assess the performance of our model on the MNIST benchmark data set and challenging real-world tasks of clustering mouse organs from single-cell RNA-sequencing measurements and defining cell subpopulations from mass cytometry (CyTOF) measurements on hundreds of different datasets. MoE-Sim-VAE exhibits superior clustering performance on all these tasks in comparison to the baselines as well as competitor methods.
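
The description above outlines the architecture at a high level: a VAE encoder, a latent space encouraged to follow a Gaussian mixture, and a Mixture-of-Experts decoder in which each expert captures one mode (cluster) of the data. The snippet below is a minimal, illustrative PyTorch sketch of that decoder structure only; it is not the authors' MoE-Sim-VAE implementation, all layer sizes and names (MoESimVAESketch, gate, experts) are assumptions, and the similarity loss on the latent representation described in the abstract is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoESimVAESketch(nn.Module):
    """Illustrative VAE with a Mixture-of-Experts decoder (not the authors' code)."""

    def __init__(self, input_dim=784, latent_dim=10, n_experts=10):
        super().__init__()
        # Encoder maps each data point to the parameters of a Gaussian posterior.
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        # Gating network: soft assignment of each point to one expert (cluster).
        self.gate = nn.Linear(latent_dim, n_experts)
        # One decoder ("expert") per data mode.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, input_dim))
            for _ in range(n_experts)
        ])

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z ~ N(mu, sigma^2).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        gate = F.softmax(self.gate(z), dim=-1)                   # (batch, n_experts)
        outs = torch.stack([e(z) for e in self.experts], dim=1)  # (batch, n_experts, input_dim)
        # Mix the experts' reconstructions according to the gating weights.
        recon = (gate.unsqueeze(-1) * outs).sum(dim=1)
        return recon, mu, logvar, gate
```

In such a sketch, cluster assignments could be read off as gate.argmax(dim=-1); the published model additionally imposes the Gaussian mixture structure and similarity-based objective on the latent space, which this simplified example does not reproduce.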


Bibliographic Details
Main Authors: Kopf, Andreas, Fortuin, Vincent, Somnath, Vignesh Ram, Claassen, Manfred
Format: Online Article Text
Language: English
Published: Public Library of Science 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8277074/
https://www.ncbi.nlm.nih.gov/pubmed/34191792
http://dx.doi.org/10.1371/journal.pcbi.1009086
_version_ 1783722012051505152
author Kopf, Andreas
Fortuin, Vincent
Somnath, Vignesh Ram
Claassen, Manfred
author_facet Kopf, Andreas
Fortuin, Vincent
Somnath, Vignesh Ram
Claassen, Manfred
author_sort Kopf, Andreas
collection PubMed
description Clustering high-dimensional data, such as images or biological measurements, is a long-standing problem and has been studied extensively. Recently, Deep Clustering has gained popularity due to its flexibility in fitting the specific peculiarities of complex data. Here we introduce the Mixture-of-Experts Similarity Variational Autoencoder (MoE-Sim-VAE), a novel generative clustering model. The model can learn multi-modal distributions of high-dimensional data and use these to generate realistic data with high efficacy and efficiency. MoE-Sim-VAE is based on a Variational Autoencoder (VAE), where the decoder consists of a Mixture-of-Experts (MoE) architecture. This specific architecture allows for various modes of the data to be automatically learned by means of the experts. Additionally, we encourage the lower dimensional latent representation of our model to follow a Gaussian mixture distribution and to accurately represent the similarities between the data points. We assess the performance of our model on the MNIST benchmark data set and challenging real-world tasks of clustering mouse organs from single-cell RNA-sequencing measurements and defining cell subpopulations from mass cytometry (CyTOF) measurements on hundreds of different datasets. MoE-Sim-VAE exhibits superior clustering performance on all these tasks in comparison to the baselines as well as competitor methods.
format Online
Article
Text
id pubmed-8277074
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-8277074 2021-07-20 Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations on single cell data Kopf, Andreas Fortuin, Vincent Somnath, Vignesh Ram Claassen, Manfred PLoS Comput Biol Research Article Clustering high-dimensional data, such as images or biological measurements, is a long-standing problem and has been studied extensively. Recently, Deep Clustering has gained popularity due to its flexibility in fitting the specific peculiarities of complex data. Here we introduce the Mixture-of-Experts Similarity Variational Autoencoder (MoE-Sim-VAE), a novel generative clustering model. The model can learn multi-modal distributions of high-dimensional data and use these to generate realistic data with high efficacy and efficiency. MoE-Sim-VAE is based on a Variational Autoencoder (VAE), where the decoder consists of a Mixture-of-Experts (MoE) architecture. This specific architecture allows for various modes of the data to be automatically learned by means of the experts. Additionally, we encourage the lower dimensional latent representation of our model to follow a Gaussian mixture distribution and to accurately represent the similarities between the data points. We assess the performance of our model on the MNIST benchmark data set and challenging real-world tasks of clustering mouse organs from single-cell RNA-sequencing measurements and defining cell subpopulations from mass cytometry (CyTOF) measurements on hundreds of different datasets. MoE-Sim-VAE exhibits superior clustering performance on all these tasks in comparison to the baselines as well as competitor methods. Public Library of Science 2021-06-30 /pmc/articles/PMC8277074/ /pubmed/34191792 http://dx.doi.org/10.1371/journal.pcbi.1009086 Text en © 2021 Kopf et al https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Kopf, Andreas
Fortuin, Vincent
Somnath, Vignesh Ram
Claassen, Manfred
Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations on single cell data
title Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations on single cell data
title_full Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations on single cell data
title_fullStr Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations on single cell data
title_full_unstemmed Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations on single cell data
title_short Mixture-of-Experts Variational Autoencoder for clustering and generating from similarity-based representations on single cell data
title_sort mixture-of-experts variational autoencoder for clustering and generating from similarity-based representations on single cell data
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8277074/
https://www.ncbi.nlm.nih.gov/pubmed/34191792
http://dx.doi.org/10.1371/journal.pcbi.1009086
work_keys_str_mv AT kopfandreas mixtureofexpertsvariationalautoencoderforclusteringandgeneratingfromsimilaritybasedrepresentationsonsinglecelldata
AT fortuinvincent mixtureofexpertsvariationalautoencoderforclusteringandgeneratingfromsimilaritybasedrepresentationsonsinglecelldata
AT somnathvigneshram mixtureofexpertsvariationalautoencoderforclusteringandgeneratingfromsimilaritybasedrepresentationsonsinglecelldata
AT claassenmanfred mixtureofexpertsvariationalautoencoderforclusteringandgeneratingfromsimilaritybasedrepresentationsonsinglecelldata