Image-Evoked Emotion Recognition for Hearing-Impaired Subjects with EEG Signals
In recent years, there has been a growing interest in the study of emotion recognition through electroencephalogram (EEG) signals. One particular group of interest is individuals with hearing impairments, who may have a bias towards certain types of information when communicating with those in their environment.
Main Authors: | Zhu, Mu; Jin, Haonan; Bai, Zhongli; Li, Zhiwei; Song, Yu |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10301379/ https://www.ncbi.nlm.nih.gov/pubmed/37420628 http://dx.doi.org/10.3390/s23125461 |
_version_ | 1785064797873635328 |
---|---|
author | Zhu, Mu; Jin, Haonan; Bai, Zhongli; Li, Zhiwei; Song, Yu |
author_facet | Zhu, Mu; Jin, Haonan; Bai, Zhongli; Li, Zhiwei; Song, Yu |
author_sort | Zhu, Mu |
collection | PubMed |
description | In recent years, there has been a growing interest in the study of emotion recognition through electroencephalogram (EEG) signals. One particular group of interest is individuals with hearing impairments, who may have a bias towards certain types of information when communicating with those in their environment. To address this, our study collected EEG signals from both hearing-impaired and non-hearing-impaired subjects while they viewed pictures of emotional faces for emotion recognition. Four kinds of feature matrices were constructed to extract spatial-domain information: the symmetry difference and the symmetry quotient, each computed from both the original signal and its differential entropy (DE). A multi-axis self-attention classification model was proposed, consisting of local attention and global attention and combining the attention model with convolution through a novel architectural element for feature classification. Three-classification (positive, neutral, negative) and five-classification (happy, neutral, sad, angry, fearful) emotion recognition tasks were carried out. The experimental results show that the proposed method is superior to the original feature method, and multi-feature fusion achieved a good effect for both hearing-impaired and non-hearing-impaired subjects. The average classification accuracy was 70.2% (three-classification) and 50.15% (five-classification) for hearing-impaired subjects, and 72.05% (three-classification) and 51.53% (five-classification) for non-hearing-impaired subjects. In addition, by exploring the brain topography of different emotions, we found that the discriminative brain regions of the hearing-impaired subjects were also distributed in the parietal lobe, unlike those of the non-hearing-impaired subjects. |
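The feature construction summarized in the abstract (differential entropy per channel, then symmetry difference and symmetry quotient over left/right electrode pairs) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Gaussian DE formula is the one commonly used for band-filtered EEG, and the electrode pairing below is a hypothetical example, not taken from the article.

```python
import numpy as np

def differential_entropy(segment):
    """DE of a band-filtered EEG segment, assuming it is roughly Gaussian:
    DE = 0.5 * ln(2 * pi * e * variance)."""
    var = np.var(segment, ddof=1)
    return 0.5 * np.log(2 * np.pi * np.e * var)

# Hypothetical symmetric (left, right) channel index pairs, e.g. F3/F4, C3/C4.
PAIRS = [(0, 1), (2, 3)]

def symmetry_features(channel_values):
    """Symmetry difference (left - right) and symmetry quotient (left / right)
    for each symmetric electrode pair, over per-channel feature values."""
    diff = np.array([channel_values[l] - channel_values[r] for l, r in PAIRS])
    quot = np.array([channel_values[l] / channel_values[r] for l, r in PAIRS])
    return diff, quot

# Example: DE per channel on synthetic data, then the two symmetry features.
# Applying symmetry_features to the raw-signal features as well as to the DE
# features yields the four kinds of feature matrices the abstract mentions.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 256))  # 4 channels x 256 samples
de = np.array([differential_entropy(ch) for ch in eeg])
de_diff, de_quot = symmetry_features(de)
```

These spatial features would then be fed to the classification stage; the multi-axis self-attention model itself is not reconstructed here.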
format | Online Article Text |
id | pubmed-10301379 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10301379 2023-06-29 Image-Evoked Emotion Recognition for Hearing-Impaired Subjects with EEG Signals Zhu, Mu Jin, Haonan Bai, Zhongli Li, Zhiwei Song, Yu Sensors (Basel) Article In recent years, there has been a growing interest in the study of emotion recognition through electroencephalogram (EEG) signals. One particular group of interest is individuals with hearing impairments, who may have a bias towards certain types of information when communicating with those in their environment. To address this, our study collected EEG signals from both hearing-impaired and non-hearing-impaired subjects while they viewed pictures of emotional faces for emotion recognition. Four kinds of feature matrices were constructed to extract spatial-domain information: the symmetry difference and the symmetry quotient, each computed from both the original signal and its differential entropy (DE). A multi-axis self-attention classification model was proposed, consisting of local attention and global attention and combining the attention model with convolution through a novel architectural element for feature classification. Three-classification (positive, neutral, negative) and five-classification (happy, neutral, sad, angry, fearful) emotion recognition tasks were carried out. The experimental results show that the proposed method is superior to the original feature method, and multi-feature fusion achieved a good effect for both hearing-impaired and non-hearing-impaired subjects. The average classification accuracy was 70.2% (three-classification) and 50.15% (five-classification) for hearing-impaired subjects, and 72.05% (three-classification) and 51.53% (five-classification) for non-hearing-impaired subjects. In addition, by exploring the brain topography of different emotions, we found that the discriminative brain regions of the hearing-impaired subjects were also distributed in the parietal lobe, unlike those of the non-hearing-impaired subjects.
MDPI 2023-06-09 /pmc/articles/PMC10301379/ /pubmed/37420628 http://dx.doi.org/10.3390/s23125461 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Zhu, Mu Jin, Haonan Bai, Zhongli Li, Zhiwei Song, Yu Image-Evoked Emotion Recognition for Hearing-Impaired Subjects with EEG Signals |
title | Image-Evoked Emotion Recognition for Hearing-Impaired Subjects with EEG Signals |
title_full | Image-Evoked Emotion Recognition for Hearing-Impaired Subjects with EEG Signals |
title_fullStr | Image-Evoked Emotion Recognition for Hearing-Impaired Subjects with EEG Signals |
title_full_unstemmed | Image-Evoked Emotion Recognition for Hearing-Impaired Subjects with EEG Signals |
title_short | Image-Evoked Emotion Recognition for Hearing-Impaired Subjects with EEG Signals |
title_sort | image-evoked emotion recognition for hearing-impaired subjects with eeg signals |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10301379/ https://www.ncbi.nlm.nih.gov/pubmed/37420628 http://dx.doi.org/10.3390/s23125461 |
work_keys_str_mv | AT zhumu imageevokedemotionrecognitionforhearingimpairedsubjectswitheegsignals AT jinhaonan imageevokedemotionrecognitionforhearingimpairedsubjectswitheegsignals AT baizhongli imageevokedemotionrecognitionforhearingimpairedsubjectswitheegsignals AT lizhiwei imageevokedemotionrecognitionforhearingimpairedsubjectswitheegsignals AT songyu imageevokedemotionrecognitionforhearingimpairedsubjectswitheegsignals |