Development and application of a multi-modal task analysis to support intelligent tutoring of complex skills
Main Authors:
Format: Online Article Text
Language: English
Published: Springer International Publishing, 2018
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6310465/
https://www.ncbi.nlm.nih.gov/pubmed/30631704
http://dx.doi.org/10.1186/s40594-018-0108-5
Summary:

BACKGROUND: Contemporary work in the design and development of intelligent training systems employs task analysis (TA) methods to gather knowledge that is subsequently encoded into task models. These task models form the basis of intelligent interpretation of student performance within education and training systems. Also referred to as expert models, they represent the optimal way(s) of performing a training task. Within Intelligent Tutoring Systems (ITSs), real-time comparison of trainee task performance against the task model drives automated assessment and interactive support (such as immediate feedback). However, previous TA methods, including various forms of cognitive task analysis (CTA), may not suffice to identify the detailed design specifications required to develop an ITS for a complex training task that incorporates multiple underlying skill components as well as multi-modal information presentation, assessment, and feedback modalities. Our current work seeks to develop an ITS for training Robotic Assisted Laparoscopic Surgery (RALS), a complex task domain that requires coordinated use of integrated cognitive, psychomotor, and perceptual skills.

RESULTS: In this paper, we describe a methodological extension to CTA, referred to as multi-modal task analysis (MMTA), that elicits and captures the nuances of integrated and isolated cognitive, psychomotor, and perceptual skill modalities as they apply to training and performing complex operational tasks. We illustrate the application of the MMTA method to RALS training tasks. The products of the analysis are quantitatively summarized, and observations from a preliminary qualitative validation are reported.

CONCLUSIONS: We find that iterative use of the described MMTA method yields ITS task models that are sufficiently complete and robust to encode the cognitive, psychomotor, and perceptual skills requisite to training and performing complex skills.