RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm

Answer selection (AS) is a critical subtask of the open-domain question answering (QA) problem. This paper proposes RLAS-BIABC, an AS method built on an attention-based long short-term memory (LSTM) network and bidirectional encoder representations from transformers (BERT) word embeddings, enriched by an improved artificial bee colony (ABC) algorithm for pretraining and a reinforcement learning-based algorithm for training the backpropagation (BP) weights. BERT can be incorporated into downstream tasks and fine-tuned as a unified task-specific architecture, and the pretrained BERT model captures a range of linguistic phenomena. Existing algorithms typically train the AS model as a two-class classifier on positive-negative pairs: a positive pair consists of a question and a genuine answer, a negative pair of a question and a fake answer, and the output should be one for positive pairs and zero for negative ones. Because negative pairs usually far outnumber positive ones, the classification is imbalanced, which drastically reduces system performance. To address this, we cast classification as a sequential decision-making process in which an agent takes one sample at each step and classifies it. For each classification action the agent receives a reward, with the reward for the majority class set lower than the reward for the minority class; ultimately, the agent learns the optimal policy weights. We initialize the policy weights with the improved ABC algorithm, which helps prevent problems such as getting stuck in a local optimum. Although ABC performs well on most tasks, it has a weakness: it disregards the fitness of the paired individuals when generating a neighboring food source position. This paper therefore also proposes a mutual learning technique that modifies the candidate food source using the fitter of two individuals selected by a mutual learning factor. We evaluated our model on three datasets, LegalQA, TrecQA, and WikiQA, and the results show that RLAS-BIABC achieves state-of-the-art performance.
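This record contains only the abstract, but the reward scheme it describes (a per-step reward that is smaller for the majority class than for the minority class) can be illustrated with a short sketch. The concrete scaling below, where the majority-class reward is multiplied by the class imbalance ratio, together with the `reward` function and the toy data, is an assumption for illustration, not a detail taken from the paper.

```python
import numpy as np

def reward(prediction: int, label: int, imbalance_ratio: float) -> float:
    """Per-step reward: full-sized for the minority class (label 1),
    scaled down by the imbalance ratio for the majority class (label 0).
    The scaling choice is an illustrative assumption, not from the paper."""
    correct = 1.0 if prediction == label else -1.0
    return correct if label == 1 else correct * imbalance_ratio

# Toy episode: one positive pair among five negatives.
labels = np.array([1, 0, 0, 0, 0, 0])
ratio = labels.sum() / (len(labels) - labels.sum())   # 1/5 = 0.2
preds = np.array([1, 0, 1, 0, 0, 0])                  # one false positive
total = sum(reward(int(p), int(y), ratio) for p, y in zip(preds, labels))
print(total)  # misclassifying a negative costs only -0.2, a positive -1.0
```

Under this kind of scheme the agent cannot drive up its return simply by always predicting the majority class, which is the failure mode of an ordinary cross-entropy classifier on imbalanced positive-negative pairs.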

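The improved ABC step can be sketched in the same spirit. Standard ABC generates a candidate food source as v_ij = x_ij + phi_ij (x_ij - x_kj) with a random partner k and phi_ij in [-1, 1], ignoring which of the two individuals is fitter; the mutual learning idea described in the abstract biases this step toward the fitter of the pair. The exact update rule is not given in this record, so the sketch assumes one plausible form, with the fitter individual as the base and a mutual learning factor F scaling the difference; `mutual_learning_candidate`, the `sphere` objective, and F = 0.7 are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x: np.ndarray) -> float:
    """Toy objective (minimized); stands in for the policy-weight fitness."""
    return float(np.sum(x ** 2))

def mutual_learning_candidate(pop: np.ndarray, fit: np.ndarray,
                              i: int, F: float = 0.7) -> np.ndarray:
    """Neighbor candidate for bee i using mutual learning (assumed form).

    Standard ABC perturbs x_i around itself regardless of fitness; here the
    fitter of (x_i, x_k) serves as the base, so the search moves toward the
    better individual. F is the mutual learning factor."""
    k = rng.choice([m for m in range(len(pop)) if m != i])
    base = i if fit[i] <= fit[k] else k          # lower fitness = better here
    v = pop[base].copy()
    j = rng.integers(pop.shape[1])               # perturb one dimension
    v[j] = pop[base, j] + F * rng.uniform(-1, 1) * (pop[i, j] - pop[k, j])
    return v

# One greedy employed-bee pass over a toy 5-bee, 3-dimensional population.
pop = rng.uniform(-1, 1, size=(5, 3))
fit = np.array([sphere(x) for x in pop])
for i in range(len(pop)):
    cand = mutual_learning_candidate(pop, fit, i)
    if sphere(cand) < fit[i]:                    # keep the candidate if better
        pop[i], fit[i] = cand, sphere(cand)
print(fit)
```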

Bibliographic Details
Main Authors: Gharagozlou, Hamid; Mohammadzadeh, Javad; Bastanfard, Azam; Ghidary, Saeed Shiry
Format: Online Article Text
Language: English
Published: Hindawi, 2022-05-06
Journal: Comput Intell Neurosci (Research Article)
Collection: PubMed (National Center for Biotechnology Information, record pubmed-9106472)
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9106472/
https://www.ncbi.nlm.nih.gov/pubmed/35571722
http://dx.doi.org/10.1155/2022/7839840
Copyright © 2022 Hamid Gharagozlou et al. This is an open access article distributed under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.