
Double attention recurrent convolution neural network for answer selection

Answer selection is one of the key steps in many question answering (QA) applications. In this paper, a new deep model with two kinds of attention is proposed for answer selection: the double attention recurrent convolution neural network (DARCNN). Double attention means self-attention and cross-attention. The design of this model was inspired by the transformer from the domain of machine translation. Self-attention can directly compute dependencies between words regardless of their distance. However, self-attention ignores the distinction between a word's immediate neighbours and more distant words. Thus, we design a decay self-attention that prioritizes local words in a sentence. In addition, cross-attention is established to model the interaction between the question and a candidate answer. Applying cross-attention to the outputs of self-attention and decay self-attention yields two kinds of interactive information. Finally, the feature vectors of the question and answer are combined by elementwise multiplication, and a multilayer perceptron predicts the matching score. Experimental results on four QA datasets covering Chinese and English show that DARCNN performs better than other answer selection models, thereby demonstrating the effectiveness of self-attention, decay self-attention and cross-attention in answer selection tasks.
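
The abstract describes the pipeline only at a high level: attention with a locality decay, question-answer cross-attention, elementwise combination, and MLP scoring. The PyTorch fragment below is a rough, non-authoritative sketch of one plausible reading. The linear distance penalty alpha * |i - j|, the mean pooling, and all function names are assumptions made for illustration; the paper's actual decay function and its recurrent/convolutional layers are not specified in this record.

import torch
import torch.nn.functional as F

def decay_self_attention(x, alpha=0.1):
    # x: (batch, seq_len, dim) word representations.
    # Plain scaled dot-product self-attention, plus a distance penalty so
    # nearby words get higher weight. The linear penalty alpha * |i - j|
    # is an assumption; the abstract only says local words are prioritized.
    b, n, d = x.shape
    scores = torch.matmul(x, x.transpose(1, 2)) / d ** 0.5   # (b, n, n)
    pos = torch.arange(n, dtype=x.dtype, device=x.device)
    dist = (pos[None, :] - pos[:, None]).abs()               # |i - j|
    scores = scores - alpha * dist                           # decay with distance
    return torch.matmul(F.softmax(scores, dim=-1), x)

def cross_attention(q, a):
    # Each question word attends over answer words; called with arguments
    # swapped it gives the answer-to-question direction.
    scores = torch.matmul(q, a.transpose(1, 2)) / q.size(-1) ** 0.5
    return torch.matmul(F.softmax(scores, dim=-1), a)

def matching_score(q_vec, a_vec, mlp):
    # Elementwise multiplication combines the two sentence vectors,
    # then an MLP maps the product to a scalar score, as the abstract states.
    return mlp(q_vec * a_vec)

# Usage sketch. A full DARCNN would also run undecayed self-attention and
# fuse both interactive streams; one stream is shown here for brevity.
torch.manual_seed(0)
q = torch.randn(2, 12, 64)                 # 2 questions, 12 words, dim 64
a = torch.randn(2, 30, 64)                 # 2 candidate answers, 30 words
q_int = cross_attention(decay_self_attention(q), decay_self_attention(a)).mean(dim=1)
a_int = cross_attention(decay_self_attention(a), decay_self_attention(q)).mean(dim=1)
mlp = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 1))
print(matching_score(q_int, a_int, mlp).shape)   # torch.Size([2, 1])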


Bibliographic Details
Main Authors: Bao, Ganchao; Wei, Yuan; Sun, Xin; Zhang, Hongli
Format: Online Article Text
Language: English
Published: The Royal Society, 2020
Subjects: Computer Science and artificial intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7277251/
https://www.ncbi.nlm.nih.gov/pubmed/32537190
http://dx.doi.org/10.1098/rsos.191517
License: © 2020 The Authors. Published by the Royal Society (2020-05-20) under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, provided the original author and source are credited.