Improving Network Representation Learning via Dynamic Random Walk, Self-Attention and Vertex Attributes-Driven Laplacian Space Optimization

Bibliographic Details

Main Authors: Hu, Shengxiang, Zhang, Bofeng, Lv, Hehe, Chang, Furong, Zhou, Chenyang, Wu, Liangrui, Zou, Guobing
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9498033/
https://www.ncbi.nlm.nih.gov/pubmed/36141099
http://dx.doi.org/10.3390/e24091213
author Hu, Shengxiang
Zhang, Bofeng
Lv, Hehe
Chang, Furong
Zhou, Chenyang
Wu, Liangrui
Zou, Guobing
collection PubMed
description Network data analysis is a crucial method for mining complicated object interactions. In recent years, random walk and neural-language-model-based network representation learning (NRL) approaches have been widely used for network data analysis. However, these NRL approaches suffer from the following deficiencies: firstly, because the random walk procedure is based on symmetric node similarity and fixed probability distribution, the sampled vertices’ sequences may lose local community structure information; secondly, because the feature extraction capacity of the shallow neural language model is limited, they can only extract the local structural features of networks; and thirdly, these approaches require specially designed mechanisms for different downstream tasks to integrate vertex attributes of various types. We conducted an in-depth investigation to address the aforementioned issues and propose a novel general NRL framework called dynamic structure and vertex attribute fusion network embedding, which firstly defines an asymmetric similarity and h-hop dynamic random walk strategy to guide the random walk process to preserve the network’s local community structure in walked vertex sequences. Next, we train a self-attention-based sequence prediction model on the walked vertex sequences to simultaneously learn the vertices’ local and global structural features. Finally, we introduce an attributes-driven Laplacian space optimization to converge the process of structural feature extraction and attribute feature extraction. The proposed approach is exhaustively evaluated by means of node visualization and classification on multiple benchmark datasets, and achieves superior results compared to baseline approaches.
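The description above outlines a three-part pipeline: an asymmetric-similarity dynamic random walk, a self-attention sequence model, and an attributes-driven Laplacian space optimization. The following is a minimal, hypothetical Python sketch of the first and third parts only. The asymmetric similarity used here (common neighbours normalised by the source vertex's degree) and the attribute-similarity matrix `W` are illustrative assumptions: the abstract does not specify them, the h-hop window is omitted, and this is not the paper's actual algorithm.

```python
import random
import numpy as np

def asymmetric_similarity(adj, u, v):
    # Hypothetical asymmetric similarity: the fraction of u's neighbours
    # that are also neighbours of v. Because it is normalised by |N(u)|,
    # sim(u, v) != sim(v, u) in general. adj maps each vertex to a set
    # of its neighbours.
    nu, nv = adj[u], adj[v]
    return len(nu & nv) / len(nu) if nu else 0.0

def dynamic_random_walk(adj, start, length, rng=random):
    # Sample a walk whose next step is drawn in proportion to the
    # asymmetric similarity from the current vertex to each neighbour
    # (plus a small constant so dissimilar neighbours stay reachable),
    # biasing the walk toward the current vertex's local community.
    walk = [start]
    for _ in range(length - 1):
        cur = walk[-1]
        nbrs = sorted(adj[cur])
        if not nbrs:
            break
        weights = [asymmetric_similarity(adj, cur, v) + 1e-3 for v in nbrs]
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk

def laplacian_penalty(Z, W):
    # Attributes-driven Laplacian term: tr(Z^T L Z) with L = D - W, where
    # W[u, v] is some attribute similarity between vertices u and v and
    # Z stacks the vertex embeddings row-wise. Minimising this term pulls
    # the embeddings of attribute-similar vertices together.
    L = np.diag(W.sum(axis=1)) - W
    return float(np.trace(Z.T @ L @ Z))
```

For symmetric `W`, the penalty equals (1/2) Σ_{u,v} W[u,v]·‖z_u − z_v‖², which makes the pull between attribute-similar vertices explicit.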
format Online
Article
Text
id pubmed-9498033
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9498033 2022-09-23 Improving Network Representation Learning via Dynamic Random Walk, Self-Attention and Vertex Attributes-Driven Laplacian Space Optimization. Hu, Shengxiang; Zhang, Bofeng; Lv, Hehe; Chang, Furong; Zhou, Chenyang; Wu, Liangrui; Zou, Guobing. Entropy (Basel), Article. MDPI 2022-08-30. /pmc/articles/PMC9498033/ /pubmed/36141099 http://dx.doi.org/10.3390/e24091213 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
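The second component the abstract describes is a self-attention-based sequence prediction model trained on the walked vertex sequences, so that each vertex can attend to every position in the walk rather than only a local window. A minimal sketch of the core operation, single-head scaled dot-product self-attention with no learned projections (both simplifications are assumptions, not the paper's architecture):

```python
import numpy as np

def self_attention(X):
    # X has shape (seq_len, d): one d-dimensional feature row per vertex
    # in a walked sequence. Each position attends to all others via
    # softmax(X X^T / sqrt(d)), so the output mixes features from the
    # whole walk, not just a fixed local context window.
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)
    # Numerically stable row-wise softmax.
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X
```

In a full model, learned query/key/value projections and a prediction head over vertex IDs would sit around this operation; the sketch only shows why attention captures features beyond a local neighbourhood.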
title Improving Network Representation Learning via Dynamic Random Walk, Self-Attention and Vertex Attributes-Driven Laplacian Space Optimization
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9498033/
https://www.ncbi.nlm.nih.gov/pubmed/36141099
http://dx.doi.org/10.3390/e24091213