
Identification of emotions evoked by music via spatial-temporal transformer in multi-channel EEG signals

Bibliographic Details
Main Authors: Zhou, Yanan; Lian, Jian
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10358766/
https://www.ncbi.nlm.nih.gov/pubmed/37483354
http://dx.doi.org/10.3389/fnins.2023.1188696
collection: PubMed
description: INTRODUCTION: Emotion plays a vital role in understanding human activities and associations. Because they are non-invasive, EEG signals have been widely employed as a reliable modality for emotion recognition. Identifying emotions from multi-channel EEG signals is becoming a crucial task for diagnosing emotional disorders in neuroscience. One challenge in automated emotion recognition from EEG signals is extracting and selecting discriminative features to classify different emotions accurately. METHODS: In this study, we propose a novel Transformer model for identifying emotions from multi-channel EEG signals. The raw EEG signal is fed directly into the proposed Transformer, which aims to eliminate the issues caused by the local receptive fields of convolutional neural networks. The presented deep learning model consists of two separate branches that address the spatial and the temporal information in the EEG signals, respectively. RESULTS: In the experiments, we first collected EEG recordings from 20 subjects while they listened to music. The proposed approach achieved accuracies of 97.3% for binary emotion classification (positive and negative) and 97.1% for ternary emotion classification (positive, negative, and neutral). We also conducted comparison experiments on the same dataset between the proposed method and state-of-the-art techniques, and obtained promising results relative to these approaches. DISCUSSION: Given its performance, the proposed approach can be a potentially valuable instrument for human-computer interface systems.
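The two-branch idea in the METHODS paragraph, applying attention once across electrodes (spatial) and once across time samples (temporal) on the raw signal, can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the single-head attention with identity projections, and the mean-pooling fusion are all simplifying assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Single-head scaled dot-product self-attention over rows of x.

    x: (tokens, d). Query/key/value projections are omitted (identity)
    to keep the sketch short; a real Transformer learns them.
    """
    scores = x @ x.T / np.sqrt(x.shape[1])
    return softmax(scores, axis=-1) @ x

def two_branch_features(eeg):
    """Fuse a spatial and a temporal attention branch on raw EEG.

    eeg: (channels, samples) raw multi-channel EEG segment.
    Spatial branch: each channel is a token attending over channels.
    Temporal branch: each time sample is a token attending over time.
    """
    spatial = self_attention(eeg)      # (channels, samples)
    temporal = self_attention(eeg.T)   # (samples, channels)
    # Mean-pool each branch over its tokens, then concatenate into
    # one feature vector that a classifier head could consume.
    return np.concatenate([spatial.mean(axis=0), temporal.mean(axis=0)])

rng = np.random.default_rng(0)
segment = rng.standard_normal((32, 128))  # 32 electrodes, 128 samples
feat = two_branch_features(segment)
print(feat.shape)  # (160,): 128 spatial dims + 32 temporal dims
```

Because attention relates every token to every other token, both branches see the whole segment at once, which is the property the abstract contrasts with the local receptive fields of convolutional networks.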
format: Online Article Text
id: pubmed-10358766
institution: National Center for Biotechnology Information
language: English
publishDate: 2023
publisher: Frontiers Media S.A.
record_format: MEDLINE/PubMed
spelling: Front Neurosci (Neuroscience). Frontiers Media S.A., published 2023-07-06. /pmc/articles/PMC10358766/ /pubmed/37483354 http://dx.doi.org/10.3389/fnins.2023.1188696 Text en Copyright © 2023 Zhou and Lian.
https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title: Identification of emotions evoked by music via spatial-temporal transformer in multi-channel EEG signals
topic: Neuroscience