Using the DiCoT framework for integrated multimodal analysis in mixed-reality training environments

Simulation-based training (SBT) programs are commonly employed by organizations to train individuals and teams in effective workplace cognitive and psychomotor skills across a broad range of applications. Distributed cognition has become a popular cognitive framework for the design and evaluation of these SBT environments, with structured methodologies such as Distributed Cognition for Teamwork (DiCoT) used for analysis. However, the analyses and evaluations generated by such distributed cognition frameworks require extensive domain knowledge and manual coding and interpretation, and the analysis is primarily qualitative. In this work, we propose and develop the application of multimodal learning analysis techniques to SBT scenarios. Using these methods, the rich multimodal data collected in SBT environments can be used to generate more automated interpretations of trainee performance that supplement and extend traditional DiCoT analysis. To demonstrate these methods, we present a case study of nurses training in a mixed-reality manikin-based (MRMB) environment. We show how the combined analysis of the video, speech, and eye-tracking data collected as the nurses train in the MRMB environment supports and enhances traditional qualitative DiCoT analysis. By applying such quantitative, data-driven analysis methods, we can better analyze trainee activities online in SBT and MRMB environments. With continued development, these methods could provide targeted feedback to learners, a detailed review of training performance to instructors, and data-driven evidence for improving the environment to simulation designers.

Bibliographic Details
Main Authors: Vatral, Caleb; Biswas, Gautam; Cohn, Clayton; Davalos, Eduardo; Mohammed, Naveeduddin
Format: Online Article (Text)
Language: English
Published: Frontiers Media S.A., 2022-07-22
Journal: Frontiers in Artificial Intelligence (Front Artif Intell)
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9353401/
https://www.ncbi.nlm.nih.gov/pubmed/35937140
http://dx.doi.org/10.3389/frai.2022.941825

Copyright © 2022 Vatral, Biswas, Cohn, Davalos and Mohammed. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.