Drivers’ Comprehensive Emotion Recognition Based on HAM
Main Authors: | Zhou, Dongmei; Cheng, Yongjian; Wen, Luhan; Luo, Hao; Liu, Ying |
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2023 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10574905/ https://www.ncbi.nlm.nih.gov/pubmed/37837124 http://dx.doi.org/10.3390/s23198293 |
author | Zhou, Dongmei; Cheng, Yongjian; Wen, Luhan; Luo, Hao; Liu, Ying |
author_sort | Zhou, Dongmei |
collection | PubMed |
description | Negative emotions in drivers can lead to dangerous driving behaviors, which in turn can cause serious traffic accidents. However, most current studies on driver emotion use a single modality, such as EEG, eye tracking, or driving data. In complex situations, a single modality may fail to capture a driver's complete emotional state and offers poor robustness. In recent years, some studies have applied multimodal approaches to monitor single emotions such as driver fatigue or anger, but in real driving environments, negative emotions such as sadness, anger, fear, and fatigue all have a significant impact on driving safety. Very few studies, however, have used multimodal data to accurately predict drivers' comprehensive emotions. Therefore, building on the multimodal idea, this paper aims to improve comprehensive driver emotion recognition. By combining three modalities (the driver's voice, facial images, and video sequences), a six-class driver emotion classification task is performed: sadness, anger, fear, fatigue, happiness, and emotional neutrality. To accurately identify drivers' negative emotions and thereby improve driving safety, this paper proposes a multimodal fusion framework based on CNN + Bi-LSTM + HAM for driver emotion recognition. The framework fuses feature vectors from driver audio, facial expressions, and video sequences for comprehensive driver emotion recognition. Experiments demonstrate the effectiveness of the proposed multimodal data for driver emotion recognition, achieving a recognition accuracy of 85.52%. The method's validity is further verified through comparison experiments and evaluation metrics such as accuracy and F1 score. |
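The abstract describes the fusion step only at a high level. As a minimal illustrative sketch (not the authors' implementation: the attention form, embedding size `D`, and all function names here are assumptions; the real model derives each modality's embedding from CNN and Bi-LSTM branches), late fusion of per-modality feature vectors with learned attention weights might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

EMOTIONS = ["sadness", "anger", "fear", "fatigue", "happiness", "neutral"]
D = 128  # assumed per-modality embedding size


def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def attention_fuse(audio_feat, face_feat, video_feat, w_att):
    """Weight each modality embedding by an attention score, then sum.

    Stands in for the paper's hybrid-attention fusion: modalities that score
    higher contribute more to the fused representation.
    """
    feats = np.stack([audio_feat, face_feat, video_feat])  # (3, D)
    scores = feats @ w_att                                 # (3,) raw scores
    weights = softmax(scores)                              # modality attention
    return weights @ feats                                 # (D,) fused vector


def classify(fused, w_cls, b_cls):
    """Linear head over the fused vector -> probabilities for 6 emotions."""
    return softmax(fused @ w_cls + b_cls)


# Randomly initialised stand-ins for trained parameters.
w_att = rng.normal(size=D)
w_cls = rng.normal(size=(D, len(EMOTIONS)))
b_cls = np.zeros(len(EMOTIONS))

audio, face, video = (rng.normal(size=D) for _ in range(3))
probs = classify(attention_fuse(audio, face, video, w_att), w_cls, b_cls)
print(EMOTIONS[int(probs.argmax())])
```

The sketch uses simple dot-product attention over three modality embeddings; the paper's HAM and its Bi-LSTM temporal modeling are more elaborate, but the shape of the pipeline (per-modality features, attention-weighted fusion, classification head) is the same.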
format | Online Article Text |
id | pubmed-10574905 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-105749052023-10-14 Drivers’ Comprehensive Emotion Recognition Based on HAM Zhou, Dongmei; Cheng, Yongjian; Wen, Luhan; Luo, Hao; Liu, Ying Sensors (Basel) Article MDPI 2023-10-07 /pmc/articles/PMC10574905/ /pubmed/37837124 http://dx.doi.org/10.3390/s23198293 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
title | Drivers’ Comprehensive Emotion Recognition Based on HAM |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10574905/ https://www.ncbi.nlm.nih.gov/pubmed/37837124 http://dx.doi.org/10.3390/s23198293 |