
A Multimodel-Based Deep Learning Framework for Short Text Multiclass Classification with the Imbalanced and Extremely Small Data Set

Text classification plays an important role in many practical applications. In the real world, there are extremely small datasets. Most existing methods adopt pretrained neural network models to handle this kind of dataset. However, these methods are either difficult to deploy on mobile devices because of their large output size or cannot fully extract the deep semantic information between phrases and clauses. This paper proposes a multimodel-based deep learning framework for short-text multiclass classification with an imbalanced and extremely small dataset. Our framework mainly includes five layers: the encoder layer, the word-level LSTM network layer, the sentence-level LSTM network layer, the max-pooling layer, and the SoftMax layer. The encoder layer uses DistilBERT to obtain context-sensitive dynamic word vectors that are difficult to represent in traditional feature engineering methods. Since the transformer part of this layer is distilled, our framework is compressed. Then, we use the next two layers to extract deep semantic information. The output of the encoder layer is sent to a bidirectional LSTM network, and the feature matrix is extracted hierarchically through the LSTM at the word and sentence level to obtain the fine-grained semantic representation. After that, the max-pooling layer converts the feature matrix into a lower-dimensional matrix, preserving only the obvious features. Finally, the feature matrix is taken as the input of a fully connected SoftMax layer, which contains a function that can convert the predicted linear vector into the output value as the probability of the text in each classification. Extensive experiments on two public benchmarks demonstrate the effectiveness of our proposed approach on an extremely small dataset. It matches state-of-the-art baseline performance in terms of precision, recall, accuracy, and F1 score, and its smaller model size, shorter training time, and earlier convergence epoch show that our method can be deployed faster and lighter on mobile devices.

Bibliographic Details
Main Authors: Tong, Jiajun, Wang, Zhixiao, Rui, Xiaobin
Format: Online Article Text
Language: English
Published: Hindawi 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9560856/
https://www.ncbi.nlm.nih.gov/pubmed/36248939
http://dx.doi.org/10.1155/2022/7183207
author Tong, Jiajun
Wang, Zhixiao
Rui, Xiaobin
author_facet Tong, Jiajun
Wang, Zhixiao
Rui, Xiaobin
author_sort Tong, Jiajun
collection PubMed
description Text classification plays an important role in many practical applications. In the real world, there are extremely small datasets. Most existing methods adopt pretrained neural network models to handle this kind of dataset. However, these methods are either difficult to deploy on mobile devices because of their large output size or cannot fully extract the deep semantic information between phrases and clauses. This paper proposes a multimodel-based deep learning framework for short-text multiclass classification with an imbalanced and extremely small dataset. Our framework mainly includes five layers: the encoder layer, the word-level LSTM network layer, the sentence-level LSTM network layer, the max-pooling layer, and the SoftMax layer. The encoder layer uses DistilBERT to obtain context-sensitive dynamic word vectors that are difficult to represent in traditional feature engineering methods. Since the transformer part of this layer is distilled, our framework is compressed. Then, we use the next two layers to extract deep semantic information. The output of the encoder layer is sent to a bidirectional LSTM network, and the feature matrix is extracted hierarchically through the LSTM at the word and sentence level to obtain the fine-grained semantic representation. After that, the max-pooling layer converts the feature matrix into a lower-dimensional matrix, preserving only the obvious features. Finally, the feature matrix is taken as the input of a fully connected SoftMax layer, which contains a function that can convert the predicted linear vector into the output value as the probability of the text in each classification. Extensive experiments on two public benchmarks demonstrate the effectiveness of our proposed approach on an extremely small dataset. 
It matches state-of-the-art baseline performance in terms of precision, recall, accuracy, and F1 score, and its smaller model size, shorter training time, and earlier convergence epoch show that our method can be deployed faster and lighter on mobile devices.
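The last two layers of the pipeline described in the abstract, max-pooling over the feature matrix followed by a fully connected softmax that turns a linear score vector into class probabilities, can be sketched in plain Python. This is a toy illustration under assumed shapes (a time-steps-by-features matrix and a made-up logit vector), not the authors' implementation:

```python
import math

def max_pool(feature_matrix):
    """Column-wise max over time steps: (T, D) -> (D,).
    Keeps only the most salient value of each feature dimension."""
    return [max(column) for column in zip(*feature_matrix)]

def softmax(logits):
    """Convert a linear score vector into per-class probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy feature matrix: 3 time steps, 4 feature dimensions
features = [
    [0.1, 0.9, -0.3, 0.2],
    [0.4, 0.2,  0.8, -0.1],
    [0.0, 0.5,  0.1, 0.3],
]
pooled = max_pool(features)       # -> [0.4, 0.9, 0.8, 0.3]
probs = softmax([2.0, 1.0, 0.1])  # probabilities over 3 classes, summing to 1
```

In the paper's framework the pooled vector would come from the sentence-level BiLSTM and would be passed through a learned linear layer before the softmax; the sketch above omits the learned weights and shows only the pooling and normalization steps.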
format Online
Article
Text
id pubmed-9560856
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Hindawi
record_format MEDLINE/PubMed
spelling pubmed-9560856 2022-10-14 A Multimodel-Based Deep Learning Framework for Short Text Multiclass Classification with the Imbalanced and Extremely Small Data Set Tong, Jiajun Wang, Zhixiao Rui, Xiaobin Comput Intell Neurosci Research Article Text classification plays an important role in many practical applications. In the real world, there are extremely small datasets. Most existing methods adopt pretrained neural network models to handle this kind of dataset. However, these methods are either difficult to deploy on mobile devices because of their large output size or cannot fully extract the deep semantic information between phrases and clauses. This paper proposes a multimodel-based deep learning framework for short-text multiclass classification with an imbalanced and extremely small dataset. Our framework mainly includes five layers: the encoder layer, the word-level LSTM network layer, the sentence-level LSTM network layer, the max-pooling layer, and the SoftMax layer. The encoder layer uses DistilBERT to obtain context-sensitive dynamic word vectors that are difficult to represent in traditional feature engineering methods. Since the transformer part of this layer is distilled, our framework is compressed. Then, we use the next two layers to extract deep semantic information. The output of the encoder layer is sent to a bidirectional LSTM network, and the feature matrix is extracted hierarchically through the LSTM at the word and sentence level to obtain the fine-grained semantic representation. After that, the max-pooling layer converts the feature matrix into a lower-dimensional matrix, preserving only the obvious features. Finally, the feature matrix is taken as the input of a fully connected SoftMax layer, which contains a function that can convert the predicted linear vector into the output value as the probability of the text in each classification.
Extensive experiments on two public benchmarks demonstrate the effectiveness of our proposed approach on an extremely small dataset. It matches state-of-the-art baseline performance in terms of precision, recall, accuracy, and F1 score, and its smaller model size, shorter training time, and earlier convergence epoch show that our method can be deployed faster and lighter on mobile devices. Hindawi 2022-10-06 /pmc/articles/PMC9560856/ /pubmed/36248939 http://dx.doi.org/10.1155/2022/7183207 Text en Copyright © 2022 Jiajun Tong et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Research Article
Tong, Jiajun
Wang, Zhixiao
Rui, Xiaobin
A Multimodel-Based Deep Learning Framework for Short Text Multiclass Classification with the Imbalanced and Extremely Small Data Set
title A Multimodel-Based Deep Learning Framework for Short Text Multiclass Classification with the Imbalanced and Extremely Small Data Set
title_full A Multimodel-Based Deep Learning Framework for Short Text Multiclass Classification with the Imbalanced and Extremely Small Data Set
title_fullStr A Multimodel-Based Deep Learning Framework for Short Text Multiclass Classification with the Imbalanced and Extremely Small Data Set
title_full_unstemmed A Multimodel-Based Deep Learning Framework for Short Text Multiclass Classification with the Imbalanced and Extremely Small Data Set
title_short A Multimodel-Based Deep Learning Framework for Short Text Multiclass Classification with the Imbalanced and Extremely Small Data Set
title_sort multimodel-based deep learning framework for short text multiclass classification with the imbalanced and extremely small data set
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9560856/
https://www.ncbi.nlm.nih.gov/pubmed/36248939
http://dx.doi.org/10.1155/2022/7183207
work_keys_str_mv AT tongjiajun amultimodelbaseddeeplearningframeworkforshorttextmulticlassclassificationwiththeimbalancedandextremelysmalldataset
AT wangzhixiao amultimodelbaseddeeplearningframeworkforshorttextmulticlassclassificationwiththeimbalancedandextremelysmalldataset
AT ruixiaobin amultimodelbaseddeeplearningframeworkforshorttextmulticlassclassificationwiththeimbalancedandextremelysmalldataset
AT tongjiajun multimodelbaseddeeplearningframeworkforshorttextmulticlassclassificationwiththeimbalancedandextremelysmalldataset
AT wangzhixiao multimodelbaseddeeplearningframeworkforshorttextmulticlassclassificationwiththeimbalancedandextremelysmalldataset
AT ruixiaobin multimodelbaseddeeplearningframeworkforshorttextmulticlassclassificationwiththeimbalancedandextremelysmalldataset