Deep Neural Network Approach for Pose, Illumination, and Occlusion Invariant Driver Emotion Detection
Monitoring drivers’ emotions is a key aspect of designing advanced driver assistance systems (ADAS) for intelligent vehicles. To ensure safety and reduce the likelihood of road accidents, emotional monitoring plays a key role in assessing the mental state of the driver while driving the vehicle...
Main Authors: | Sukhavasi, Susrutha Babu; Sukhavasi, Suparshya Babu; Elleithy, Khaled; El-Sayed, Ahmed; Elleithy, Abdelrahman |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8871818/ https://www.ncbi.nlm.nih.gov/pubmed/35206540 http://dx.doi.org/10.3390/ijerph19042352 |
_version_ | 1784657086612766720 |
---|---|
author | Sukhavasi, Susrutha Babu; Sukhavasi, Suparshya Babu; Elleithy, Khaled; El-Sayed, Ahmed; Elleithy, Abdelrahman |
author_facet | Sukhavasi, Susrutha Babu; Sukhavasi, Suparshya Babu; Elleithy, Khaled; El-Sayed, Ahmed; Elleithy, Abdelrahman |
author_sort | Sukhavasi, Susrutha Babu |
collection | PubMed |
description | Monitoring drivers’ emotions is a key aspect of designing advanced driver assistance systems (ADAS) for intelligent vehicles. To ensure safety and reduce the likelihood of road accidents, emotional monitoring plays a key role in assessing the mental state of the driver while driving the vehicle. However, pose variations, illumination conditions, and occlusions are factors that hinder accurate detection of driver emotions. To overcome these challenges, two novel approaches using machine learning methods and deep neural networks are proposed to monitor drivers’ expressions under different pose variations, illuminations, and occlusions. We obtained accuracies of 93.41%, 83.68%, 98.47%, and 98.18% on the CK+, FER 2013, KDEF, and KMU-FED datasets, respectively, for the first approach, and improved accuracies of 96.15%, 84.58%, 99.18%, and 99.09% on the same datasets, respectively, for the second approach, compared to existing state-of-the-art methods. |
format | Online Article Text |
id | pubmed-8871818 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8871818 2022-02-25 Deep Neural Network Approach for Pose, Illumination, and Occlusion Invariant Driver Emotion Detection Sukhavasi, Susrutha Babu; Sukhavasi, Suparshya Babu; Elleithy, Khaled; El-Sayed, Ahmed; Elleithy, Abdelrahman Int J Environ Res Public Health Article Monitoring drivers’ emotions is a key aspect of designing advanced driver assistance systems (ADAS) for intelligent vehicles. To ensure safety and reduce the likelihood of road accidents, emotional monitoring plays a key role in assessing the mental state of the driver while driving the vehicle. However, pose variations, illumination conditions, and occlusions are factors that hinder accurate detection of driver emotions. To overcome these challenges, two novel approaches using machine learning methods and deep neural networks are proposed to monitor drivers’ expressions under different pose variations, illuminations, and occlusions. We obtained accuracies of 93.41%, 83.68%, 98.47%, and 98.18% on the CK+, FER 2013, KDEF, and KMU-FED datasets, respectively, for the first approach, and improved accuracies of 96.15%, 84.58%, 99.18%, and 99.09% on the same datasets, respectively, for the second approach, compared to existing state-of-the-art methods. MDPI 2022-02-18 /pmc/articles/PMC8871818/ /pubmed/35206540 http://dx.doi.org/10.3390/ijerph19042352 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Sukhavasi, Susrutha Babu; Sukhavasi, Suparshya Babu; Elleithy, Khaled; El-Sayed, Ahmed; Elleithy, Abdelrahman Deep Neural Network Approach for Pose, Illumination, and Occlusion Invariant Driver Emotion Detection |
title | Deep Neural Network Approach for Pose, Illumination, and Occlusion Invariant Driver Emotion Detection |
title_full | Deep Neural Network Approach for Pose, Illumination, and Occlusion Invariant Driver Emotion Detection |
title_fullStr | Deep Neural Network Approach for Pose, Illumination, and Occlusion Invariant Driver Emotion Detection |
title_full_unstemmed | Deep Neural Network Approach for Pose, Illumination, and Occlusion Invariant Driver Emotion Detection |
title_short | Deep Neural Network Approach for Pose, Illumination, and Occlusion Invariant Driver Emotion Detection |
title_sort | deep neural network approach for pose, illumination, and occlusion invariant driver emotion detection |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8871818/ https://www.ncbi.nlm.nih.gov/pubmed/35206540 http://dx.doi.org/10.3390/ijerph19042352 |
work_keys_str_mv | AT sukhavasisusruthababu deepneuralnetworkapproachforposeilluminationandocclusioninvariantdriveremotiondetection AT sukhavasisuparshyababu deepneuralnetworkapproachforposeilluminationandocclusioninvariantdriveremotiondetection AT elleithykhaled deepneuralnetworkapproachforposeilluminationandocclusioninvariantdriveremotiondetection AT elsayedahmed deepneuralnetworkapproachforposeilluminationandocclusioninvariantdriveremotiondetection AT elleithyabdelrahman deepneuralnetworkapproachforposeilluminationandocclusioninvariantdriveremotiondetection |