
A Hybrid EEG-based Emotion Recognition Approach Using Wavelet Convolutional Neural Networks and Support Vector Machine


Bibliographic Details
Main Authors: Bagherzadeh, Sara; Maghooli, Keivan; Shalbaf, Ahmad; Maghsoudi, Arash
Format: Online Article Text
Language: English
Published: Iranian Neuroscience Society, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10279985/
https://www.ncbi.nlm.nih.gov/pubmed/37346875
http://dx.doi.org/10.32598/bcn.2021.3133.1
Description
Summary:

INTRODUCTION: Deep learning and convolutional neural networks (CNNs) have become widespread tools in biomedical engineering studies. A CNN is an end-to-end tool that integrates the whole processing pipeline, but in some situations it needs to be fused with classical machine learning methods to reach higher accuracy.

METHODS: In this paper, a hybrid approach based on deep features extracted from the weighted layers of wavelet CNNs (WCNNs) and a multiclass support vector machine (MSVM) was proposed to improve the recognition of emotional states from electroencephalogram (EEG) signals. First, EEG signals were preprocessed and converted to time-frequency (T-F) color representations, or scalograms, using the continuous wavelet transform (CWT). The scalograms were then fed into four popular pre-trained CNNs (AlexNet, ResNet-18, VGG-19, and Inception-v3) for fine-tuning, and the best feature layer from each network was used as input to the MSVM to classify the four quadrants of the valence-arousal model. Finally, the subject-independent leave-one-subject-out criterion was used to evaluate the proposed method on the DEAP and MAHNOB-HCI databases.

RESULTS: Extracting deep features from an early convolutional layer of ResNet-18 (Res2a) and classifying them with the MSVM increased the average accuracy, precision, and recall by about 20% and 12% for the MAHNOB-HCI and DEAP databases, respectively. Moreover, combining scalograms from four regions (pre-frontal, frontal, parietal, and parietal-occipital) and from two regions (frontal and parietal) achieved the highest average accuracies of 77.47% and 87.45% for the MAHNOB-HCI and DEAP databases, respectively.

CONCLUSION: Combining a CNN with an MSVM improved emotion recognition from EEG signals, and the results were comparable to state-of-the-art studies.
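
The pipeline summarized above (CWT scalograms, deep features from an early layer of a pre-trained CNN, and a multiclass SVM) can be sketched roughly as follows. This is a minimal illustrative sketch only, assuming PyWavelets, PyTorch/torchvision, and scikit-learn; the wavelet, scales, sampling rate, pooling size, SVM kernel, and the use of torchvision's layer1 as a stand-in for the Res2a layer are all assumptions for illustration, not the authors' exact configuration.

# Illustrative sketch of the hybrid pipeline described in the abstract:
# EEG -> CWT scalogram -> pretrained CNN deep features -> multiclass SVM.
# Library choices and all parameter values are assumptions, not the paper's setup.
import numpy as np
import pywt
import torch
import torch.nn.functional as F
from torchvision import models
from sklearn.svm import SVC

def eeg_to_scalogram(signal, fs=128.0, scales=np.arange(1, 65)):
    """Continuous wavelet transform of one EEG channel -> 2-D scalogram."""
    coefs, _ = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    power = np.abs(coefs)
    # Min-max normalise to [0, 1] so the scalogram can be treated as an image.
    return (power - power.min()) / (power.max() - power.min() + 1e-12)

def scalogram_to_tensor(scalogram, size=224):
    """Replicate the scalogram across 3 channels and resize for the CNN."""
    img = torch.from_numpy(scalogram).float()[None, None]        # (1, 1, H, W)
    img = F.interpolate(img, size=(size, size), mode="bilinear",
                        align_corners=False)
    img = img.repeat(1, 3, 1, 1)                                  # (1, 3, H, W)
    mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
    std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)
    return (img - mean) / std

# Early ResNet-18 feature extractor (torchvision's conv1/bn1/relu/maxpool/layer1,
# used here as a rough analogue of the "Res2a" layer named in the abstract),
# followed by pooling and flattening so the features fit an SVM.
backbone = models.resnet18(weights="IMAGENET1K_V1")  # requires torchvision >= 0.13
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:5],
                                        torch.nn.AdaptiveAvgPool2d((4, 4)),
                                        torch.nn.Flatten())
feature_extractor.eval()

def extract_features(signals, fs=128.0):
    """Deep features for a batch of single-channel EEG trials."""
    feats = []
    with torch.no_grad():
        for sig in signals:
            x = scalogram_to_tensor(eeg_to_scalogram(sig, fs))
            feats.append(feature_extractor(x).squeeze(0).numpy())
    return np.stack(feats)

# Multiclass SVM over the four valence-arousal quadrants (labels 0..3).
# X_train, y_train, X_test, y_test are hypothetical placeholders.
# clf = SVC(kernel="rbf", C=1.0, decision_function_shape="ovo")
# clf.fit(extract_features(X_train), y_train)
# accuracy = clf.score(extract_features(X_test), y_test)

In this sketch the SVM replaces the CNN's fully connected classifier, which mirrors the hybrid CNN-plus-MSVM idea described in the abstract; in the actual study, per-channel or per-region scalograms would be combined before feature extraction and the networks fine-tuned beforehand.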