Two novel nonlinear multivariate grey models with kernel learning for small-sample time series prediction
Main Authors: | Wang, Lan; Li, Nan; Xie, Ming; Wu, Lifeng |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer Netherlands, 2023 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9958329/ https://www.ncbi.nlm.nih.gov/pubmed/37025646 http://dx.doi.org/10.1007/s11071-023-08296-y |
_version_ | 1784894998965125120 |
---|---|
author | Wang, Lan Li, Nan Xie, Ming Wu, Lifeng |
author_facet | Wang, Lan Li, Nan Xie, Ming Wu, Lifeng |
author_sort | Wang, Lan |
collection | PubMed |
description | For many applications, small-sample time series prediction based on grey forecasting models has become indispensable. Many algorithms have been developed recently to make such models effective, and each is suited to a particular application depending on the properties of the time series to be inferred. To develop a generalized nonlinear multivariable grey model with higher compatibility and generalization performance, we nonlinearize the traditional GM(1,N) and call the result NGM(1,N). An unidentified nonlinear function, which maps the data into a better representational space, appears in both the NGM(1,N) and its response function. Parameter estimation for the NGM(1,N) is posed as an optimization problem with linear equality constraints, and two different approaches are taken to solve it. The first is the Lagrange multiplier method, which converts the optimization problem into a linear system; the second is the standard dualization method based on Lagrange multipliers, which uses a flexible estimation equation for the development coefficient. As the size of the training data increases, the estimates of the potential development coefficient become richer, and the final estimate obtained by averaging becomes more reliable. During the solution process, the kernel function expresses the dot product of the two unidentified nonlinear mappings, greatly lowering the computational cost of handling the nonlinearity. Three numerical examples show that the LDNGM(1,N) outperforms the other multivariate grey models compared in terms of generalization performance. The duality theory and the kernel-learning framework are instructive for further research on multivariate grey models. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s11071-023-08296-y. |
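The abstract describes an LSSVM-style route: parameter estimation is posed as an equality-constrained optimization, solved via Lagrange multipliers, and the kernel function stands in for explicit dot products of the unknown nonlinear mapping. The sketch below is a minimal, generic illustration of that idea (a least-squares SVM-type dual system with an RBF kernel); the function names, kernel choice, and the regularization parameter `gamma` are illustrative assumptions, not the authors' exact NGM(1,N)/LDNGM(1,N) equations.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X1 and X2."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_kernel_dual(X, y, gamma=10.0, sigma=1.0):
    """Solve an LSSVM-style dual problem via Lagrange multipliers.

    Primal:  min 0.5*||w||^2 + (gamma/2) * sum_i e_i^2
             s.t. y_i = w . phi(x_i) + b + e_i
    The KKT conditions reduce to one linear system in (b, alpha);
    the mapping phi never appears explicitly -- only the kernel K does.
    """
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def predict(X_train, b, alpha, X_new, sigma=1.0):
    """Prediction needs only kernel evaluations against the training points."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Tiny usage example on synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(12, 2))        # small sample, two driving series
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]    # nonlinear target
b, alpha = fit_kernel_dual(X, y, gamma=50.0, sigma=0.5)
print(predict(X, b, alpha, X[:3], sigma=0.5))
```

The point of the kernel substitution is that the dual solve scales with the sample size n (one (n+1)-dimensional linear system) rather than with the dimension of the nonlinear feature map, which is what makes this kind of formulation practical in small-sample settings.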
format | Online Article Text |
id | pubmed-9958329 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Springer Netherlands |
record_format | MEDLINE/PubMed |
spelling | pubmed-9958329 2023-02-28 Two novel nonlinear multivariate grey models with kernel learning for small-sample time series prediction Wang, Lan Li, Nan Xie, Ming Wu, Lifeng Nonlinear Dyn Original Paper Springer Netherlands 2023-02-25 2023 /pmc/articles/PMC9958329/ /pubmed/37025646 http://dx.doi.org/10.1007/s11071-023-08296-y Text en © The Author(s), under exclusive licence to Springer Nature B.V. 2023. Made available via the PMC Open Access Subset. |
spellingShingle | Original Paper Wang, Lan Li, Nan Xie, Ming Wu, Lifeng Two novel nonlinear multivariate grey models with kernel learning for small-sample time series prediction |
title | Two novel nonlinear multivariate grey models with kernel learning for small-sample time series prediction |
title_full | Two novel nonlinear multivariate grey models with kernel learning for small-sample time series prediction |
title_fullStr | Two novel nonlinear multivariate grey models with kernel learning for small-sample time series prediction |
title_full_unstemmed | Two novel nonlinear multivariate grey models with kernel learning for small-sample time series prediction |
title_short | Two novel nonlinear multivariate grey models with kernel learning for small-sample time series prediction |
title_sort | two novel nonlinear multivariate grey models with kernel learning for small-sample time series prediction |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9958329/ https://www.ncbi.nlm.nih.gov/pubmed/37025646 http://dx.doi.org/10.1007/s11071-023-08296-y |
work_keys_str_mv | AT wanglan twonovelnonlinearmultivariategreymodelswithkernellearningforsmallsampletimeseriesprediction AT linan twonovelnonlinearmultivariategreymodelswithkernellearningforsmallsampletimeseriesprediction AT xieming twonovelnonlinearmultivariategreymodelswithkernellearningforsmallsampletimeseriesprediction AT wulifeng twonovelnonlinearmultivariategreymodelswithkernellearningforsmallsampletimeseriesprediction |