
Sparse Logistic Regression With L(1/2) Penalty for Emotion Recognition in Electroencephalography Classification


Bibliographic Details
Main Authors: Chen, Dong-Wei, Miao, Rui, Deng, Zhao-Yong, Lu, Yue-Yue, Liang, Yong, Huang, Lan
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7427509/
https://www.ncbi.nlm.nih.gov/pubmed/32848688
http://dx.doi.org/10.3389/fninf.2020.00029
Description
Summary: Emotion recognition based on electroencephalography (EEG) signals is a current focus of brain-computer interface research. However, EEG classification is difficult owing to the large amounts of data and high levels of noise involved, so it is important to determine how to effectively extract features that carry important information. Regularization, one of the effective methods for EEG signal processing, can extract important features from the signal and has potential applications in EEG emotion recognition. The most popular regularization techniques are currently Lasso (L(1)) and Ridge Regression (L(2)), and in recent years researchers have proposed many other regularization terms. In theory, L(q)-type regularization with a smaller q yields sparser solutions. L(1/2) regularization is of L(q) type (0 < q < 1) and has been shown to have many attractive properties. In this work, we studied the L(1/2) penalty in sparse logistic regression for three-class EEG emotion recognition, using a coordinate descent algorithm and a univariate semi-threshold operator to implement L(1/2)-penalized logistic regression. Experimental results on simulated and real data demonstrate that the proposed method outperforms existing regularization methods: sparse logistic regression with the L(1/2) penalty achieves higher classification accuracy than the conventional L(1), Ridge Regression, and Elastic Net regularization methods while using fewer but more informative EEG signals. This is very important for high-dimensional, small-sample EEG data and can help researchers reduce computational complexity and improve computational accuracy. We therefore propose sparse logistic regression with the L(1/2) penalty as an effective technique for emotion recognition in practical classification problems.
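The univariate semi-threshold operator mentioned in the summary has a known closed form in the L(1/2) regularization literature (the half-thresholding function of Xu et al., 2012). Below is a minimal sketch of that published form, assuming the operator solves the scalar problem min_b ½(b − z)² + λ|b|^(1/2); the exact variant used in the paper may differ.

```python
import math

def half_threshold(z, lam):
    """Closed-form half-thresholding operator for the L(1/2) penalty.

    Solves argmin_b 0.5*(b - z)**2 + lam*abs(b)**0.5 (Xu et al., 2012).
    Inputs whose magnitude falls below the threshold
    (54**(1/3)/4) * lam**(2/3) map exactly to zero, which is what
    produces sparsity in the fitted coefficients.
    """
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    if abs(z) <= thresh:
        return 0.0
    # The arccos argument is at most sqrt(2)/2 whenever |z| exceeds
    # the threshold above, so phi is well defined.
    phi = math.acos((lam / 8.0) * (abs(z) / 3.0) ** -1.5)
    return (2.0 / 3.0) * z * (1.0 + math.cos(2.0 * math.pi / 3.0 - 2.0 * phi / 3.0))

print(half_threshold(0.5, 1.0))   # below threshold -> 0.0
print(half_threshold(10.0, 1.0))  # shrunk toward zero but kept nonzero
```

In a coordinate descent scheme of the kind the summary describes, each coefficient would be updated in turn by applying such an operator to its partial residual while the remaining coefficients are held fixed.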