N-GPETS: Neural Attention Graph-Based Pretrained Statistical Model for Extractive Text Summarization


Bibliographic Details
Main Authors: Umair, Muhammad; Alam, Iftikhar; Khan, Atif; Khan, Inayat; Ullah, Niamat; Momand, Mohammad Yusuf
Format: Online Article Text
Language: English
Published: Hindawi 2022
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9708337/
https://www.ncbi.nlm.nih.gov/pubmed/36458230
http://dx.doi.org/10.1155/2022/6241373
_version_ 1784840907238932480
author Umair, Muhammad
Alam, Iftikhar
Khan, Atif
Khan, Inayat
Ullah, Niamat
Momand, Mohammad Yusuf
author_facet Umair, Muhammad
Alam, Iftikhar
Khan, Atif
Khan, Inayat
Ullah, Niamat
Momand, Mohammad Yusuf
author_sort Umair, Muhammad
collection PubMed
description The extractive summarization approach builds a summary by selecting the salient sentences of the source document. One of the most important aspects of extractive summarization is learning and modelling cross-sentence associations. Inspired by the success of the pretrained Bidirectional Encoder Representations from Transformers (BERT) language model and of the graph attention network (GAT), whose structure captures intersentence associations, this work proposes a novel neural model, N-GPETS, which combines a heterogeneous graph attention network with the BERT model and with a statistical approach based on the TF-IDF values of the document for the extractive summarization task. Apart from sentence nodes, N-GPETS also employs semantic word nodes of varying granularity levels that serve as links between sentences, improving intersentence interaction. Furthermore, N-GPETS gains richer features by initializing the graph layer with the BERT encoder rather than with other neural encoders such as CNNs or LSTMs. To the best of our knowledge, this work is the first attempt to combine a BERT encoder and the TF-IDF values of the entire document with a heterogeneous attention graph structure for the extractive summarization task. Empirical results on the benchmark CNN/Daily Mail (CNN/DM) news data set show that N-GPETS achieves favorable results compared with other heterogeneous graph structures that employ the BERT model and with graph structures that do not.
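A minimal sketch of the graph construction the description outlines, assuming scikit-learn's TfidfVectorizer for the statistical part. Random vectors stand in for the BERT sentence embeddings and for the word-node features, and the single-head attention step is a simplified stand-in for the paper's heterogeneous GAT layer; the function names and the toy feature dimension are illustrative, not the authors' code.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def build_word_sentence_edges(sentences):
    """Rows = sentence nodes, columns = word nodes; entries are TF-IDF edge weights."""
    vectorizer = TfidfVectorizer(stop_words="english")
    edges = vectorizer.fit_transform(sentences).toarray()
    return edges, vectorizer.get_feature_names_out()

def attention_message_pass(sent_feats, word_feats, edges):
    """One word-to-sentence attention step (simplified, single-head GAT stand-in)."""
    logits = sent_feats @ word_feats.T              # (sentences, words) attention logits
    logits = np.where(edges > 0, logits, -1e9)      # attend only along graph edges
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability for softmax
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)
    return (attn * edges) @ word_feats              # TF-IDF-modulated aggregation

sentences = [
    "The cabinet approved the national budget on Tuesday.",
    "Opposition parties criticized the budget as unrealistic.",
    "Weather services forecast heavy rain for the weekend.",
]
edges, vocab = build_word_sentence_edges(sentences)
rng = np.random.default_rng(0)
dim = 8                                              # toy feature size
sent_feats = rng.normal(size=(len(sentences), dim))  # BERT sentence embeddings in N-GPETS
word_feats = rng.normal(size=(len(vocab), dim))      # word-node initial features
updated = attention_message_pass(sent_feats, word_feats, edges)
print(updated.shape)                                 # (3, 8): updated sentence-node features

In the full model, the updated sentence-node features would then feed a scoring layer that decides which sentences enter the extractive summary.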
format Online
Article
Text
id pubmed-9708337
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Hindawi
record_format MEDLINE/PubMed
spelling pubmed-9708337 2022-11-30 N-GPETS: Neural Attention Graph-Based Pretrained Statistical Model for Extractive Text Summarization Umair, Muhammad; Alam, Iftikhar; Khan, Atif; Khan, Inayat; Ullah, Niamat; Momand, Mohammad Yusuf Comput Intell Neurosci Research Article The extractive summarization approach builds a summary by selecting the salient sentences of the source document. One of the most important aspects of extractive summarization is learning and modelling cross-sentence associations. Inspired by the success of the pretrained Bidirectional Encoder Representations from Transformers (BERT) language model and of the graph attention network (GAT), whose structure captures intersentence associations, this work proposes a novel neural model, N-GPETS, which combines a heterogeneous graph attention network with the BERT model and with a statistical approach based on the TF-IDF values of the document for the extractive summarization task. Apart from sentence nodes, N-GPETS also employs semantic word nodes of varying granularity levels that serve as links between sentences, improving intersentence interaction. Furthermore, N-GPETS gains richer features by initializing the graph layer with the BERT encoder rather than with other neural encoders such as CNNs or LSTMs. To the best of our knowledge, this work is the first attempt to combine a BERT encoder and the TF-IDF values of the entire document with a heterogeneous attention graph structure for the extractive summarization task. Empirical results on the benchmark CNN/Daily Mail (CNN/DM) news data set show that N-GPETS achieves favorable results compared with other heterogeneous graph structures that employ the BERT model and with graph structures that do not. Hindawi 2022-11-22 /pmc/articles/PMC9708337/ /pubmed/36458230 http://dx.doi.org/10.1155/2022/6241373 Text en Copyright © 2022 Muhammad Umair et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Research Article
Umair, Muhammad
Alam, Iftikhar
Khan, Atif
Khan, Inayat
Ullah, Niamat
Momand, Mohammad Yusuf
N-GPETS: Neural Attention Graph-Based Pretrained Statistical Model for Extractive Text Summarization
title N-GPETS: Neural Attention Graph-Based Pretrained Statistical Model for Extractive Text Summarization
title_full N-GPETS: Neural Attention Graph-Based Pretrained Statistical Model for Extractive Text Summarization
title_fullStr N-GPETS: Neural Attention Graph-Based Pretrained Statistical Model for Extractive Text Summarization
title_full_unstemmed N-GPETS: Neural Attention Graph-Based Pretrained Statistical Model for Extractive Text Summarization
title_short N-GPETS: Neural Attention Graph-Based Pretrained Statistical Model for Extractive Text Summarization
title_sort n-gpets: neural attention graph-based pretrained statistical model for extractive text summarization
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9708337/
https://www.ncbi.nlm.nih.gov/pubmed/36458230
http://dx.doi.org/10.1155/2022/6241373
work_keys_str_mv AT umairmuhammad ngpetsneuralattentiongraphbasedpretrainedstatisticalmodelforextractivetextsummarization
AT alamiftikhar ngpetsneuralattentiongraphbasedpretrainedstatisticalmodelforextractivetextsummarization
AT khanatif ngpetsneuralattentiongraphbasedpretrainedstatisticalmodelforextractivetextsummarization
AT khaninayat ngpetsneuralattentiongraphbasedpretrainedstatisticalmodelforextractivetextsummarization
AT ullahniamat ngpetsneuralattentiongraphbasedpretrainedstatisticalmodelforextractivetextsummarization
AT momandmohammadyusuf ngpetsneuralattentiongraphbasedpretrainedstatisticalmodelforextractivetextsummarization