Augmenting Social Science Research with Multimodal Data Collection: The EZ-MMLA Toolkit
While the majority of social scientists still rely on traditional research instruments (e.g., surveys, self-reports, qualitative observations), multimodal sensing is becoming an emerging methodology for capturing human behaviors. Sensing technology has the potential to complement and enrich traditional measures by providing high frequency data on people’s behavior, cognition and affects. However, there is currently no easy-to-use toolkit for recording multimodal data streams. Existing methodologies rely on the use of physical sensors and custom-written code for accessing sensor data. In this paper, we present the EZ-MMLA toolkit. This toolkit was implemented as a website and provides easy access to multimodal data collection algorithms. One can collect a variety of data modalities: data on users’ attention (eye-tracking), physiological states (heart rate), body posture (skeletal data), gestures (from hand motion), emotions (from facial expressions and speech) and lower-level computer vision algorithms (e.g., fiducial/color tracking). This toolkit can run from any browser and does not require dedicated hardware or programming experience. We compare this toolkit with traditional methods and describe a case study where the EZ-MMLA toolkit was used by aspiring educational researchers in a classroom context. We conclude by discussing future work and other applications of this toolkit, potential limitations and implications.
Main Authors: Schneider, Bertrand; Hassan, Javaria; Sung, Gahyun
Format: Online Article Text
Language: English
Published: MDPI, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8780387/ https://www.ncbi.nlm.nih.gov/pubmed/35062528 http://dx.doi.org/10.3390/s22020568
_version_ | 1784637825679884288 |
author | Schneider, Bertrand; Hassan, Javaria; Sung, Gahyun
author_facet | Schneider, Bertrand; Hassan, Javaria; Sung, Gahyun
author_sort | Schneider, Bertrand |
collection | PubMed |
description | While the majority of social scientists still rely on traditional research instruments (e.g., surveys, self-reports, qualitative observations), multimodal sensing is becoming an emerging methodology for capturing human behaviors. Sensing technology has the potential to complement and enrich traditional measures by providing high frequency data on people’s behavior, cognition and affects. However, there is currently no easy-to-use toolkit for recording multimodal data streams. Existing methodologies rely on the use of physical sensors and custom-written code for accessing sensor data. In this paper, we present the EZ-MMLA toolkit. This toolkit was implemented as a website and provides easy access to multimodal data collection algorithms. One can collect a variety of data modalities: data on users’ attention (eye-tracking), physiological states (heart rate), body posture (skeletal data), gestures (from hand motion), emotions (from facial expressions and speech) and lower-level computer vision algorithms (e.g., fiducial/color tracking). This toolkit can run from any browser and does not require dedicated hardware or programming experience. We compare this toolkit with traditional methods and describe a case study where the EZ-MMLA toolkit was used by aspiring educational researchers in a classroom context. We conclude by discussing future work and other applications of this toolkit, potential limitations and implications. |
format | Online Article Text |
id | pubmed-8780387 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8780387 2022-01-22 Augmenting Social Science Research with Multimodal Data Collection: The EZ-MMLA Toolkit Schneider, Bertrand; Hassan, Javaria; Sung, Gahyun Sensors (Basel) Article While the majority of social scientists still rely on traditional research instruments (e.g., surveys, self-reports, qualitative observations), multimodal sensing is becoming an emerging methodology for capturing human behaviors. Sensing technology has the potential to complement and enrich traditional measures by providing high frequency data on people’s behavior, cognition and affects. However, there is currently no easy-to-use toolkit for recording multimodal data streams. Existing methodologies rely on the use of physical sensors and custom-written code for accessing sensor data. In this paper, we present the EZ-MMLA toolkit. This toolkit was implemented as a website and provides easy access to multimodal data collection algorithms. One can collect a variety of data modalities: data on users’ attention (eye-tracking), physiological states (heart rate), body posture (skeletal data), gestures (from hand motion), emotions (from facial expressions and speech) and lower-level computer vision algorithms (e.g., fiducial/color tracking). This toolkit can run from any browser and does not require dedicated hardware or programming experience. We compare this toolkit with traditional methods and describe a case study where the EZ-MMLA toolkit was used by aspiring educational researchers in a classroom context. We conclude by discussing future work and other applications of this toolkit, potential limitations and implications. MDPI 2022-01-12 /pmc/articles/PMC8780387/ /pubmed/35062528 http://dx.doi.org/10.3390/s22020568 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article; Schneider, Bertrand; Hassan, Javaria; Sung, Gahyun; Augmenting Social Science Research with Multimodal Data Collection: The EZ-MMLA Toolkit
title | Augmenting Social Science Research with Multimodal Data Collection: The EZ-MMLA Toolkit |
title_full | Augmenting Social Science Research with Multimodal Data Collection: The EZ-MMLA Toolkit |
title_fullStr | Augmenting Social Science Research with Multimodal Data Collection: The EZ-MMLA Toolkit |
title_full_unstemmed | Augmenting Social Science Research with Multimodal Data Collection: The EZ-MMLA Toolkit |
title_short | Augmenting Social Science Research with Multimodal Data Collection: The EZ-MMLA Toolkit |
title_sort | augmenting social science research with multimodal data collection: the ez-mmla toolkit |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8780387/ https://www.ncbi.nlm.nih.gov/pubmed/35062528 http://dx.doi.org/10.3390/s22020568 |
work_keys_str_mv | AT schneiderbertrand augmentingsocialscienceresearchwithmultimodaldatacollectiontheezmmlatoolkit AT hassanjavaria augmentingsocialscienceresearchwithmultimodaldatacollectiontheezmmlatoolkit AT sunggahyun augmentingsocialscienceresearchwithmultimodaldatacollectiontheezmmlatoolkit |