
Building a Discourse-Argument Hybrid System for Vietnamese Why-Question Answering



Bibliographic Details
Main Authors: Nguyen, Chinh Trong; Nguyen, Dang Tuan
Format: Online Article Text
Language: English
Published: Hindawi 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8727104/
https://www.ncbi.nlm.nih.gov/pubmed/34992649
http://dx.doi.org/10.1155/2021/6550871
Description
Summary: Recently, many deep learning models have achieved high results in the question answering task, with overall F1 scores above 0.88 on SQuAD datasets. However, many of these models have quite low F1 scores on why-questions, ranging from 0.57 to 0.7 on the SQuAD v1.1 development set. This means these models are better suited to extracting answers for factoid questions than for why-questions. Why-questions are asked when explanations are needed; these explanations may be arguments or simply subjective opinions. Therefore, we propose an approach to finding the answer for a why-question using discourse analysis and natural language inference. In our approach, natural language inference is applied to identify implicit arguments at the sentence level, and also in sentence similarity calculation. Discourse analysis is applied to identify the explicit arguments and the opinions at the sentence level in documents. The results from these two methods form the answer candidates from which the final answer for each why-question is selected. We also implement a system with our approach. Our system can provide an answer given a why-question and a document, as in a reading comprehension test. We test our system with a Vietnamese-translated test set containing all why-questions of the SQuAD v1.1 development set. The test results show that our system cannot beat a deep learning model in F1 score; however, our system can answer more questions (answer rate of 77.0%) than the deep learning model (answer rate of 61.0%).
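The candidate-selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it stands in for their NLI-based sentence similarity with a simple token-overlap (Jaccard) score, and the threshold, function names, and example data are all hypothetical. It also shows how a system can decline to answer when no candidate scores high enough, which is what the reported answer rate measures.

```python
import re


def token_overlap(a: str, b: str) -> float:
    """Jaccard overlap between token sets.

    A crude stand-in for the NLI-based sentence similarity the paper
    describes; the real system would score entailment with a trained model.
    """
    sa = set(re.findall(r"\w+", a.lower()))
    sb = set(re.findall(r"\w+", b.lower()))
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0


def select_answer(question: str, candidates: list[str], threshold: float = 0.1):
    """Rank answer candidates (e.g. sentences found by discourse analysis
    and NLI) and return the best one, or None when no candidate clears the
    threshold -- i.e. the system declines to answer."""
    scored = [(token_overlap(question, c), c) for c in candidates]
    best_score, best = max(scored)
    return best if best_score >= threshold else None


# Hypothetical example: two candidate sentences extracted from a document.
question = "Why did the project fail"
candidates = [
    "The project failed because its funding was cut.",
    "The weather was sunny that day.",
]
print(select_answer(question, candidates))
```

The design point is that, unlike span-extraction models that always emit a span, a candidate-ranking system with a threshold can abstain, trading F1 against answer rate as the abstract reports.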