Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion
Main Authors: | Younis, Eman M. G.; Zaki, Someya Mohsen; Kanjo, Eiman; Houssein, Essam H. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9371233/ https://www.ncbi.nlm.nih.gov/pubmed/35957167 http://dx.doi.org/10.3390/s22155611 |
_version_ | 1784767076293935104 |
---|---|
author | Younis, Eman M. G. Zaki, Someya Mohsen Kanjo, Eiman Houssein, Essam H. |
author_facet | Younis, Eman M. G. Zaki, Someya Mohsen Kanjo, Eiman Houssein, Essam H. |
author_sort | Younis, Eman M. G. |
collection | PubMed |
description | Automatic recognition of human emotions is not a trivial process. Many internal and external factors affect emotions, and emotions can be expressed in many ways, such as text, speech, body gestures, or even physiological body responses. Emotion detection enables many applications, such as adaptive user interfaces, interactive games, human-robot interaction, and many more. The availability of advanced technologies such as mobile devices, sensors, and data-analytics tools has made it possible to collect data from various sources, enabling researchers to predict human emotions accurately. However, most current research collects such data in laboratory experiments. In this work, we use direct, real-time sensor data to construct a subject-independent (generic) multi-modal emotion prediction model. This research integrates on-body physiological markers, surrounding sensory data, and emotion measurements to achieve the following goals: (1) collecting a multi-modal data set including environmental data, body responses, and emotions; (2) creating subject-independent predictive models of emotional states based on fusing environmental and physiological variables; (3) assessing ensemble learning methods, comparing their performance in creating a generic subject-independent model for emotion recognition with high accuracy, and comparing the results with previous similar research. To achieve this, we conducted a real-world study “in the wild” with physiological and mobile sensors, collecting the data set from participants walking around Minia University campus to build accurate predictive models. Various ensemble learning models (bagging, boosting, and stacking) were used, combining K-Nearest Neighbors (KNN), Decision Tree (DT), Random Forest (RF), and Support Vector Machine (SVM) as base learners, with DT as the meta-classifier. The results showed that the stacking ensemble gave the best accuracy, 98.2%, compared with the other ensemble learning variants, while the bagging and boosting methods gave 96.4% and 96.6%, respectively. (See the illustrative ensemble sketch after the record fields below.) |
format | Online Article Text |
id | pubmed-9371233 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9371233 2022-08-12 Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion Younis, Eman M. G. Zaki, Someya Mohsen Kanjo, Eiman Houssein, Essam H. Sensors (Basel) Article Automatic recognition of human emotions is not a trivial process. Many internal and external factors affect emotions, and emotions can be expressed in many ways, such as text, speech, body gestures, or even physiological body responses. Emotion detection enables many applications, such as adaptive user interfaces, interactive games, human-robot interaction, and many more. The availability of advanced technologies such as mobile devices, sensors, and data-analytics tools has made it possible to collect data from various sources, enabling researchers to predict human emotions accurately. However, most current research collects such data in laboratory experiments. In this work, we use direct, real-time sensor data to construct a subject-independent (generic) multi-modal emotion prediction model. This research integrates on-body physiological markers, surrounding sensory data, and emotion measurements to achieve the following goals: (1) collecting a multi-modal data set including environmental data, body responses, and emotions; (2) creating subject-independent predictive models of emotional states based on fusing environmental and physiological variables; (3) assessing ensemble learning methods, comparing their performance in creating a generic subject-independent model for emotion recognition with high accuracy, and comparing the results with previous similar research. To achieve this, we conducted a real-world study “in the wild” with physiological and mobile sensors, collecting the data set from participants walking around Minia University campus to build accurate predictive models. Various ensemble learning models (bagging, boosting, and stacking) were used, combining K-Nearest Neighbors (KNN), Decision Tree (DT), Random Forest (RF), and Support Vector Machine (SVM) as base learners, with DT as the meta-classifier. The results showed that the stacking ensemble gave the best accuracy, 98.2%, compared with the other ensemble learning variants, while the bagging and boosting methods gave 96.4% and 96.6%, respectively. MDPI 2022-07-27 /pmc/articles/PMC9371233/ /pubmed/35957167 http://dx.doi.org/10.3390/s22155611 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Younis, Eman M. G. Zaki, Someya Mohsen Kanjo, Eiman Houssein, Essam H. Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion |
title | Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion |
title_full | Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion |
title_fullStr | Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion |
title_full_unstemmed | Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion |
title_short | Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion |
title_sort | evaluating ensemble learning methods for multi-modal emotion recognition using sensor data fusion |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9371233/ https://www.ncbi.nlm.nih.gov/pubmed/35957167 http://dx.doi.org/10.3390/s22155611 |
work_keys_str_mv | AT younisemanmg evaluatingensemblelearningmethodsformultimodalemotionrecognitionusingsensordatafusion AT zakisomeyamohsen evaluatingensemblelearningmethodsformultimodalemotionrecognitionusingsensordatafusion AT kanjoeiman evaluatingensemblelearningmethodsformultimodalemotionrecognitionusingsensordatafusion AT housseinessamh evaluatingensemblelearningmethodsformultimodalemotionrecognitionusingsensordatafusion |
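The abstract above names three ensemble strategies (bagging, boosting, and stacking) built over KNN, DT, RF, and SVM base learners with a DT meta-classifier, but the record contains no code. The snippet below is a minimal, hypothetical sketch of how such a configuration could be assembled with scikit-learn; the synthetic dataset, hyperparameters, and preprocessing choices are assumptions standing in for the fused environmental and physiological features, which are not specified in this record.

```python
# Illustrative sketch only (not the authors' code): bagging, boosting, and
# stacking ensembles analogous to those described in the abstract, built with
# scikit-learn. A synthetic dataset stands in for the fused sensor features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import (BaggingClassifier, AdaBoostClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# Stand-in for the fused environmental/physiological feature matrix and
# the emotion labels (assumed here to be a 3-class problem).
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Base learners named in the abstract: KNN, DT, RF, SVM.
base_learners = [
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
]

ensembles = {
    # Bagging: decision trees trained on bootstrap resamples, predictions averaged.
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=100,
                                 random_state=0),
    # Boosting: trees fitted sequentially, each reweighting the previous errors.
    "boosting": AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                                   n_estimators=100, random_state=0),
    # Stacking: KNN/DT/RF/SVM outputs combined by a DT meta-classifier,
    # mirroring the configuration reported in the abstract.
    "stacking": StackingClassifier(estimators=base_learners,
                                   final_estimator=DecisionTreeClassifier(random_state=0),
                                   cv=5),
}

for name, model in ensembles.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```

In the stacking variant, `cv=5` means the base learners' out-of-fold predictions on the training set become the input features for the DT meta-classifier, which is the arrangement the abstract reports as giving the highest accuracy (98.2%, versus 96.4% for bagging and 96.6% for boosting).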