
Multi-Task Learning Model for Kazakh Query Understanding


Bibliographic Details
Main Authors: Haisa, Gulizada; Altenbek, Gulila
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9785505/
https://www.ncbi.nlm.nih.gov/pubmed/36560177
http://dx.doi.org/10.3390/s22249810
Description: Query understanding (QU) plays a vital role in natural language processing, particularly in question answering and dialogue systems. QU finds the named entity and the query intent in users’ questions. Traditional pipeline approaches manage the two tasks, namely named entity recognition (NER) and question classification (QC), separately. NER is treated as a sequence labeling task to predict keywords, while QC is a semantic classification task to predict the user’s intent. Given the correlation between these two tasks, training them together could benefit both. Kazakh is a low-resource language with rich lexical and agglutinative characteristics. We argue that current QU techniques restrict the power of the word-level and sentence-level features of agglutinative languages, especially the stem, suffixes, POS, and gazetteers. This paper proposes a new multi-task learning model for query understanding (MTQU). The MTQU model establishes direct connections between the QC and NER tasks so that they mutually reinforce each other, and we also designed a multi-feature input layer that significantly influenced the model’s performance during training. In addition, we constructed a new corpus for the Kazakh query understanding task, namely the KQU. As a result, the MTQU model is simple and effective and obtains competitive results on the KQU.
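The joint-training idea the abstract describes can be sketched as a shared encoder feeding two heads: a per-token NER tagger and a sentence-level intent classifier, optimized with a single summed loss. The following is a minimal numpy sketch of that pattern only; all names, dimensions, and the pooling choice are illustrative assumptions, not the paper's actual MTQU architecture or feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; chosen only to make the sketch runnable).
VOCAB, EMB, HID, N_TAGS, N_INTENTS, SEQ = 50, 16, 32, 5, 3, 6

# Shared parameters: embedding table plus one dense encoder layer.
E = rng.normal(0, 0.1, (VOCAB, EMB))
W_enc = rng.normal(0, 0.1, (EMB, HID))
# Task-specific heads: per-token NER tagger and sentence-level QC classifier.
W_ner = rng.normal(0, 0.1, (HID, N_TAGS))
W_qc = rng.normal(0, 0.1, (HID, N_INTENTS))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(token_ids):
    h = np.tanh(E[token_ids] @ W_enc)          # (SEQ, HID) shared representation
    ner_probs = softmax(h @ W_ner)             # one tag distribution per token
    qc_probs = softmax(h.mean(axis=0) @ W_qc)  # mean-pooled sentence -> intent
    return ner_probs, qc_probs

def joint_loss(token_ids, ner_tags, intent):
    ner_p, qc_p = forward(token_ids)
    # Per-token tagging loss plus sentence-level intent loss, summed so that
    # gradients from both tasks update the shared encoder.
    ner_nll = -np.log(ner_p[np.arange(len(token_ids)), ner_tags]).mean()
    qc_nll = -np.log(qc_p[intent])
    return ner_nll + qc_nll

tokens = rng.integers(0, VOCAB, SEQ)
tags = rng.integers(0, N_TAGS, SEQ)
loss = joint_loss(tokens, tags, intent=1)
```

Because both heads read the same hidden states, errors in either task propagate into the shared encoder, which is the mechanism by which joint training lets NER and QC inform each other.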
Journal: Sensors (Basel)
Published online: 2022-12-14
License: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).