Multi-Kernel Temporal and Spatial Convolution for EEG-Based Emotion Classification

Deep learning using an end-to-end convolutional neural network (ConvNet) has been applied to several electroencephalography (EEG)-based brain–computer interface tasks to extract feature maps and classify the target output. However, EEG analysis remains challenging, since it requires consideration of various architectural design components that influence the representational ability of the extracted features. This study proposes an EEG-based emotion classification model called the multi-kernel temporal and spatial convolution network (MultiT-S ConvNet). The model uses multi-scale kernels to learn at various time resolutions and applies separable convolutions to find related spatial patterns. In addition, we enhance both the temporal and spatial filters with a lightweight gating mechanism. To validate the performance and classification accuracy of MultiT-S ConvNet, we conduct subject-dependent and subject-independent experiments on the EEG-based emotion datasets DEAP and SEED. Compared with existing methods, MultiT-S ConvNet achieves higher accuracy with fewer trainable parameters. Moreover, the proposed multi-scale module in temporal filtering extracts a wide range of EEG representations, covering short- to long-wavelength components. This module could be incorporated into any EEG-based convolutional network, potentially improving the model’s learning capacity.
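
To make the architecture described above concrete, the following is a minimal PyTorch sketch of the three ingredients the abstract names: parallel temporal convolutions at several kernel sizes (the multi-scale module), a depthwise separable convolution across EEG electrodes (spatial filtering), and a lightweight multiplicative gate. The kernel sizes, filter counts, the sigmoid gate, and the class name MultiScaleTemporalSpatialNet are all illustrative assumptions, not the published MultiT-S ConvNet configuration.

# Illustrative sketch only; hyperparameters and gating details are assumptions,
# not the configuration reported in the paper.
import torch
import torch.nn as nn


class MultiScaleTemporalSpatialNet(nn.Module):
    def __init__(self, n_channels=32, n_classes=2,
                 temporal_kernels=(16, 32, 64), filters_per_kernel=8):
        super().__init__()
        # One temporal branch per kernel size: each sees the raw signal as a
        # (1, channels, time) "image" and convolves along the time axis only.
        self.temporal_branches = nn.ModuleList([
            nn.Conv2d(1, filters_per_kernel, kernel_size=(1, k),
                      padding=(0, k // 2), bias=False)
            for k in temporal_kernels
        ])
        n_temporal = filters_per_kernel * len(temporal_kernels)
        # Depthwise convolution across the electrode axis: each temporal
        # feature map gets its own spatial filter (a separable convolution).
        self.spatial = nn.Conv2d(n_temporal, n_temporal,
                                 kernel_size=(n_channels, 1),
                                 groups=n_temporal, bias=False)
        # Lightweight gate: a sigmoid-activated 1x1 convolution that reweights
        # feature maps multiplicatively (an assumption about the "gating").
        self.gate = nn.Sequential(
            nn.Conv2d(n_temporal, n_temporal, kernel_size=1), nn.Sigmoid())
        self.bn = nn.BatchNorm2d(n_temporal)
        self.pool = nn.AdaptiveAvgPool2d((1, 16))
        self.classifier = nn.Linear(n_temporal * 16, n_classes)

    def forward(self, x):
        # x: (batch, eeg_channels, time) -> (batch, 1, eeg_channels, time)
        x = x.unsqueeze(1)
        # Concatenate multi-scale temporal feature maps along the filter axis,
        # cropping to a common time length (even kernels pad one extra sample).
        feats = [branch(x) for branch in self.temporal_branches]
        t = min(f.shape[-1] for f in feats)
        x = torch.cat([f[..., :t] for f in feats], dim=1)
        x = self.spatial(x)          # collapse the electrode axis to 1
        x = x * self.gate(x)         # multiplicative gating
        x = self.pool(torch.relu(self.bn(x)))
        return self.classifier(x.flatten(1))


if __name__ == "__main__":
    model = MultiScaleTemporalSpatialNet(n_channels=32, n_classes=2)
    eeg = torch.randn(4, 32, 512)    # 4 trials, 32 electrodes, 512 samples
    print(model(eeg).shape)          # -> torch.Size([4, 2])

The depthwise (groups=n_temporal) spatial convolution is what keeps the parameter count small relative to a full convolution, which is consistent with the abstract's claim of fewer trainable parameters.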

Bibliographic Details

Main Authors: Emsawas, Taweesak; Morita, Takashi; Kimura, Tsukasa; Fukui, Ken-ichi; Numao, Masayuki
Format: Online Article (Text)
Language: English
Published: MDPI, 27 October 2022, in Sensors (Basel)
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9654218/
https://www.ncbi.nlm.nih.gov/pubmed/36365948
http://dx.doi.org/10.3390/s22218250
Collection: PubMed
Record ID: pubmed-9654218
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Rights: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).