
Goal selection and feedback for solving math word problems


Bibliographic Details
Main authors: He, Daijun; Xiao, Jing
Format: Online Article Text
Language: English
Published: Springer US, 2022
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9628340/
https://www.ncbi.nlm.nih.gov/pubmed/36340422
http://dx.doi.org/10.1007/s10489-022-04253-1
Collection: PubMed
Description: Solving Math Word Problems (MWPs) automatically is a challenging task for AI tutoring in online education. Most existing state-of-the-art (SOTA) neural models for solving MWPs use the Goal-driven Tree-structured Solver (GTS) as their decoder. However, owing to the limitations of tree-structured recurrent neural networks, GTS cannot access the information of all previously generated nodes at each decoding time step, so its performance on long math expressions is unsatisfactory. To address these limitations, we propose a Goal Selection and Feedback (GSF) decoding module. At each time step of GSF, we first feed the latest result back to all goal vectors through a goal feedback operation, and then generate the new goal vector through an attention-based goal selection operation. The decoder can thus collect historical information from all generated nodes through the goal selection operation, while the goal feedback operation keeps those nodes updated in a timely manner. In addition, a Multilayer Fusion Network (MFN) is proposed to provide a better representation of each hidden state during decoding. Combining the ELECTRA language model with our novel decoder, experiments on the Math23k, Ape-clean, and MAWPS datasets show that our model outperforms the SOTA baselines, especially on complex samples with long math expressions. The ablation study and case study further verify that our model better solves samples with long expressions and that the proposed components indeed help enhance the model's performance.
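The two decoding operations named in the abstract (goal feedback, then attention-based goal selection) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function names, the dot-product attention scoring, and the interpolation weight `alpha` are assumptions made for illustration; the actual GSF module uses learned networks.

```python
import numpy as np

def goal_feedback(goals, latest, alpha=0.5):
    # Fold the latest decoding result back into every goal vector
    # (alpha is an illustrative mixing weight, not from the paper).
    return (1 - alpha) * goals + alpha * latest

def goal_selection(goals, query):
    # Attention over all generated goal vectors: score each node,
    # softmax the scores, and mix the goals into a new goal vector.
    scores = goals @ query                   # one score per generated node
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ goals                   # attention-weighted combination

rng = np.random.default_rng(0)
goals = rng.standard_normal((4, 8))       # 4 generated nodes, hidden size 8
latest = rng.standard_normal(8)           # latest decoding result
goals = goal_feedback(goals, latest)      # step 1: update all goal vectors
new_goal = goal_selection(goals, latest)  # step 2: select the new goal
```

Note how every generated node contributes to `new_goal`, which is the property the abstract contrasts with GTS, where a tree-structured decoder cannot see all previously generated nodes at once.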
ID: pubmed-9628340
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Journal: Appl Intell (Dordr)
Published online: 2022-11-02; issue year 2023
License: © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022.
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.