Graph Clustering with High-Order Contrastive Learning
Graph clustering is a fundamental and challenging task in unsupervised learning. It has achieved great progress due to contrastive learning. However, we find that there are two problems that need to be addressed: (1) The augmentations in most graph contrastive clustering methods are manual, which can result in semantic drift. (2) Contrastive learning is usually implemented on the feature level, ignoring the structure level, which can lead to sub-optimal performance. In this work, we propose a method termed Graph Clustering with High-Order Contrastive Learning (GCHCL) to solve these problems. First, we construct two views by Laplacian smoothing raw features with different normalizations and design a structure alignment loss to force these two views to be mapped into the same space. Second, we build a contrastive similarity matrix with two structure-based similarity matrices and force it to align with an identity matrix. In this way, our designed contrastive learning encompasses a larger neighborhood, enabling our model to learn clustering-friendly embeddings without the need for an extra clustering module. In addition, our model can be trained on a large dataset. Extensive experiments on five datasets validate the effectiveness of our model. For example, compared to the second-best baselines on four small and medium datasets, our model achieved an average improvement of 3% in accuracy. For the largest dataset, our model achieved an accuracy score of 81.92%, whereas the compared baselines encountered out-of-memory issues.
Main Authors: | Li, Wang; Zhu, En; Wang, Siwei; Guo, Xifeng |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606795/ https://www.ncbi.nlm.nih.gov/pubmed/37895553 http://dx.doi.org/10.3390/e25101432 |
_version_ | 1785127401438576640 |
---|---|
author | Li, Wang; Zhu, En; Wang, Siwei; Guo, Xifeng |
author_facet | Li, Wang; Zhu, En; Wang, Siwei; Guo, Xifeng |
author_sort | Li, Wang |
collection | PubMed |
description | Graph clustering is a fundamental and challenging task in unsupervised learning. It has achieved great progress due to contrastive learning. However, we find that there are two problems that need to be addressed: (1) The augmentations in most graph contrastive clustering methods are manual, which can result in semantic drift. (2) Contrastive learning is usually implemented on the feature level, ignoring the structure level, which can lead to sub-optimal performance. In this work, we propose a method termed Graph Clustering with High-Order Contrastive Learning (GCHCL) to solve these problems. First, we construct two views by Laplacian smoothing raw features with different normalizations and design a structure alignment loss to force these two views to be mapped into the same space. Second, we build a contrastive similarity matrix with two structure-based similarity matrices and force it to align with an identity matrix. In this way, our designed contrastive learning encompasses a larger neighborhood, enabling our model to learn clustering-friendly embeddings without the need for an extra clustering module. In addition, our model can be trained on a large dataset. Extensive experiments on five datasets validate the effectiveness of our model. For example, compared to the second-best baselines on four small and medium datasets, our model achieved an average improvement of 3% in accuracy. For the largest dataset, our model achieved an accuracy score of 81.92%, whereas the compared baselines encountered out-of-memory issues. |
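The two steps described in the abstract — building augmentation-free views by Laplacian smoothing the raw features under two different adjacency normalizations, then aligning the views' cosine-similarity matrix with the identity — can be illustrated with a minimal NumPy sketch. This is a hypothetical re-implementation on a dense toy graph, not the authors' GCHCL code; the function names, the smoothing depth `k`, and the mean-squared alignment loss are illustrative assumptions.

```python
import numpy as np

def laplacian_smooth(X, A, k=2, mode="sym"):
    """Smooth node features by k rounds of propagation over the graph.

    mode="sym": symmetric normalization D^{-1/2} (A + I) D^{-1/2}
    mode="rw":  random-walk normalization D^{-1} (A + I)
    Using two different normalizations yields two views of the same
    graph without manual augmentations (illustrative assumption).
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees incl. self-loop
    if mode == "sym":
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        P = D_inv_sqrt @ A_hat @ D_inv_sqrt
    else:
        P = np.diag(1.0 / d) @ A_hat
    H = X.copy()
    for _ in range(k):
        H = P @ H                           # one smoothing step
    return H

def contrastive_similarity(Z1, Z2):
    """Cosine-similarity matrix between the two smoothed views.

    Pushing this matrix toward the identity pulls each node's two
    views together and pushes different nodes apart.
    """
    Z1 = Z1 / np.linalg.norm(Z1, axis=1, keepdims=True)
    Z2 = Z2 / np.linalg.norm(Z2, axis=1, keepdims=True)
    return Z1 @ Z2.T

# Toy example: a 4-node path graph 0-1-2-3 with 2-dim features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.1, 0.9], [0.0, 1.0]])

Z1 = laplacian_smooth(X, A, mode="sym")   # view 1
Z2 = laplacian_smooth(X, A, mode="rw")    # view 2
S = contrastive_similarity(Z1, Z2)
loss = np.mean((S - np.eye(4)) ** 2)      # alignment toward identity
```

On this toy graph the diagonal of `S` stays near 1 (each node's two views almost coincide), while cross-cluster entries such as `S[0, 3]` are clearly smaller, which is the signal the alignment loss sharpens during training.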
format | Online Article Text |
id | pubmed-10606795 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10606795 2023-10-28 Graph Clustering with High-Order Contrastive Learning Li, Wang; Zhu, En; Wang, Siwei; Guo, Xifeng Entropy (Basel) Article MDPI 2023-10-10 /pmc/articles/PMC10606795/ /pubmed/37895553 http://dx.doi.org/10.3390/e25101432 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Li, Wang Zhu, En Wang, Siwei Guo, Xifeng Graph Clustering with High-Order Contrastive Learning |
title | Graph Clustering with High-Order Contrastive Learning |
title_full | Graph Clustering with High-Order Contrastive Learning |
title_fullStr | Graph Clustering with High-Order Contrastive Learning |
title_full_unstemmed | Graph Clustering with High-Order Contrastive Learning |
title_short | Graph Clustering with High-Order Contrastive Learning |
title_sort | graph clustering with high-order contrastive learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606795/ https://www.ncbi.nlm.nih.gov/pubmed/37895553 http://dx.doi.org/10.3390/e25101432 |
work_keys_str_mv | AT liwang graphclusteringwithhighordercontrastivelearning AT zhuen graphclusteringwithhighordercontrastivelearning AT wangsiwei graphclusteringwithhighordercontrastivelearning AT guoxifeng graphclusteringwithhighordercontrastivelearning |