Driver’s Facial Expression Recognition in Real-Time for Safe Driving
In recent years, researchers of deep neural networks (DNNs)-based facial expression recognition (FER) have reported results showing that these approaches overcome the limitations of conventional machine learning-based FER approaches. However, as DNN-based FER approaches require an excessive amount o...
Main Authors: | Jeong, Mira; Ko, Byoung Chul
Format: | Online Article Text
Language: | English
Published: | MDPI, 2018
Subjects: | Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6308562/ https://www.ncbi.nlm.nih.gov/pubmed/30518132 http://dx.doi.org/10.3390/s18124270
_version_ | 1783383218320310272 |
author | Jeong, Mira; Ko, Byoung Chul
author_facet | Jeong, Mira; Ko, Byoung Chul
author_sort | Jeong, Mira |
collection | PubMed |
description | In recent years, researchers working on deep neural network (DNN)-based facial expression recognition (FER) have reported results showing that these approaches overcome the limitations of conventional machine learning-based FER approaches. However, because DNN-based FER approaches require a large amount of memory and incur high processing costs, their application in various fields is very limited and depends on the hardware specifications. In this paper, we propose a fast FER algorithm for monitoring a driver’s emotions that is capable of operating on the low-specification devices installed in vehicles. For this purpose, a hierarchical weighted random forest (WRF) classifier, trained on the basis of the similarity of the sample data in order to improve its accuracy, is employed. In the first step, facial landmarks are detected in the input images and geometric features are extracted from the spatial positions between the landmarks. These feature vectors are then fed into the proposed hierarchical WRF classifier to classify facial expressions. Our method was evaluated experimentally on three databases, the extended Cohn-Kanade (CK+) database, MMI, and the Keimyung University Facial Expression of Drivers (KMU-FED) database, and its performance was compared with that of state-of-the-art methods. The results show that the proposed method achieves accuracy comparable to that of deep learning FER methods, 92.6% on CK+ and 76.7% on MMI, at a processing cost approximately 3731 times lower than that of the DNN method. These results confirm that the proposed method is well suited to real-time embedded applications with limited computing resources.
format | Online Article Text |
id | pubmed-6308562 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-63085622019-01-04 Driver’s Facial Expression Recognition in Real-Time for Safe Driving Jeong, Mira; Ko, Byoung Chul Sensors (Basel) Article In recent years, researchers working on deep neural network (DNN)-based facial expression recognition (FER) have reported results showing that these approaches overcome the limitations of conventional machine learning-based FER approaches. However, because DNN-based FER approaches require a large amount of memory and incur high processing costs, their application in various fields is very limited and depends on the hardware specifications. In this paper, we propose a fast FER algorithm for monitoring a driver’s emotions that is capable of operating on the low-specification devices installed in vehicles. For this purpose, a hierarchical weighted random forest (WRF) classifier, trained on the basis of the similarity of the sample data in order to improve its accuracy, is employed. In the first step, facial landmarks are detected in the input images and geometric features are extracted from the spatial positions between the landmarks. These feature vectors are then fed into the proposed hierarchical WRF classifier to classify facial expressions. Our method was evaluated experimentally on three databases, the extended Cohn-Kanade (CK+) database, MMI, and the Keimyung University Facial Expression of Drivers (KMU-FED) database, and its performance was compared with that of state-of-the-art methods. The results show that the proposed method achieves accuracy comparable to that of deep learning FER methods, 92.6% on CK+ and 76.7% on MMI, at a processing cost approximately 3731 times lower than that of the DNN method. These results confirm that the proposed method is well suited to real-time embedded applications with limited computing resources. MDPI 2018-12-04 /pmc/articles/PMC6308562/ /pubmed/30518132 http://dx.doi.org/10.3390/s18124270 Text en © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article Jeong, Mira Ko, Byoung Chul Driver’s Facial Expression Recognition in Real-Time for Safe Driving |
title | Driver’s Facial Expression Recognition in Real-Time for Safe Driving |
title_full | Driver’s Facial Expression Recognition in Real-Time for Safe Driving |
title_fullStr | Driver’s Facial Expression Recognition in Real-Time for Safe Driving |
title_full_unstemmed | Driver’s Facial Expression Recognition in Real-Time for Safe Driving |
title_short | Driver’s Facial Expression Recognition in Real-Time for Safe Driving |
title_sort | driver’s facial expression recognition in real-time for safe driving |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6308562/ https://www.ncbi.nlm.nih.gov/pubmed/30518132 http://dx.doi.org/10.3390/s18124270 |
work_keys_str_mv | AT jeongmira driversfacialexpressionrecognitioninrealtimeforsafedriving AT kobyoungchul driversfacialexpressionrecognitioninrealtimeforsafedriving |
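The description above outlines a three-stage pipeline: detect facial landmarks, derive geometric features from the spatial relations between the landmarks, and classify the expression with a hierarchical weighted random forest (WRF). The sketch below is a minimal, hypothetical illustration of that flow, not the authors' implementation: it assumes 68-point landmarks are already provided by an upstream detector, uses normalised pairwise landmark distances as a stand-in for the paper's geometric features, and substitutes scikit-learn's plain RandomForestClassifier for the proposed hierarchical WRF.

```python
# Illustrative sketch only -- NOT the authors' hierarchical WRF.
# Assumptions: landmarks follow the common 68-point convention, and
# expression labels (anger, happiness, surprise, ...) are given.
from itertools import combinations

import numpy as np
from sklearn.ensemble import RandomForestClassifier


def geometric_features(landmarks: np.ndarray) -> np.ndarray:
    """Pairwise Euclidean distances between landmarks, normalised by the
    inter-ocular distance so the features are roughly scale-invariant.
    `landmarks` has shape (68, 2)."""
    # Assumed convention: points 36 and 45 are the outer eye corners.
    inter_ocular = np.linalg.norm(landmarks[36] - landmarks[45]) + 1e-8
    dists = [np.linalg.norm(landmarks[i] - landmarks[j])
             for i, j in combinations(range(len(landmarks)), 2)]
    return np.asarray(dists) / inter_ocular


def train_expression_classifier(landmark_sets, labels):
    """Fit a plain random forest on geometric features.
    `landmark_sets` is a list of (68, 2) arrays; `labels` the expressions."""
    feats = np.stack([geometric_features(lm) for lm in landmark_sets])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(feats, labels)
    return clf


def predict_expression(clf, landmarks):
    """Classify a single face given its landmark array."""
    return clf.predict(geometric_features(landmarks)[None, :])[0]
```

In the paper, the forest is additionally organised hierarchically and trained with weights derived from the similarity of the sample data, which is what the abstract credits for accuracy comparable to DNN-based FER at a fraction of the processing cost; neither refinement is reproduced in this sketch.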