
Exploring collaborative caption editing to augment video-based learning

Captions play a major role in making educational videos accessible to all and are known to benefit a wide range of learners. However, many educational videos either do not have captions or have inaccurate captions. Prior work has shown the benefits of using crowdsourcing to obtain accurate captions in a cost-efficient way, though there is a lack of understanding of how learners edit captions of educational videos either individually or collaboratively. In this work, we conducted a user study where 58 learners (in a course of 387 learners) participated in the editing of captions in 89 lecture videos that were generated by Automatic Speech Recognition (ASR) technologies. For each video, different learners conducted two rounds of editing. Based on editing logs, we created a taxonomy of errors in educational video captions (e.g., Discipline-Specific, General, Equations). From the interviews, we identified individual and collaborative error editing strategies. We then further demonstrated the feasibility of applying machine learning models to assist learners in editing. Our work provides practical implications for advancing video-based learning and for educational video caption editing.


Bibliographic Details
Main Authors: Bhavya, Bhavya, Chen, Si, Zhang, Zhilin, Li, Wenting, Zhai, Chengxiang, Angrave, Lawrence, Huang, Yun
Format: Online Article Text
Language: English
Published: Springer US 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9285185/
https://www.ncbi.nlm.nih.gov/pubmed/35855355
http://dx.doi.org/10.1007/s11423-022-10137-5
_version_ 1784747717998673920
author Bhavya, Bhavya
Chen, Si
Zhang, Zhilin
Li, Wenting
Zhai, Chengxiang
Angrave, Lawrence
Huang, Yun
author_facet Bhavya, Bhavya
Chen, Si
Zhang, Zhilin
Li, Wenting
Zhai, Chengxiang
Angrave, Lawrence
Huang, Yun
author_sort Bhavya, Bhavya
collection PubMed
description Captions play a major role in making educational videos accessible to all and are known to benefit a wide range of learners. However, many educational videos either do not have captions or have inaccurate captions. Prior work has shown the benefits of using crowdsourcing to obtain accurate captions in a cost-efficient way, though there is a lack of understanding of how learners edit captions of educational videos either individually or collaboratively. In this work, we conducted a user study where 58 learners (in a course of 387 learners) participated in the editing of captions in 89 lecture videos that were generated by Automatic Speech Recognition (ASR) technologies. For each video, different learners conducted two rounds of editing. Based on editing logs, we created a taxonomy of errors in educational video captions (e.g., Discipline-Specific, General, Equations). From the interviews, we identified individual and collaborative error editing strategies. We then further demonstrated the feasibility of applying machine learning models to assist learners in editing. Our work provides practical implications for advancing video-based learning and for educational video caption editing.
format Online
Article
Text
id pubmed-9285185
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Springer US
record_format MEDLINE/PubMed
spelling pubmed-92851852022-07-15 Exploring collaborative caption editing to augment video-based learning Bhavya, Bhavya Chen, Si Zhang, Zhilin Li, Wenting Zhai, Chengxiang Angrave, Lawrence Huang, Yun Educ Technol Res Dev Development Article Captions play a major role in making educational videos accessible to all and are known to benefit a wide range of learners. However, many educational videos either do not have captions or have inaccurate captions. Prior work has shown the benefits of using crowdsourcing to obtain accurate captions in a cost-efficient way, though there is a lack of understanding of how learners edit captions of educational videos either individually or collaboratively. In this work, we conducted a user study where 58 learners (in a course of 387 learners) participated in the editing of captions in 89 lecture videos that were generated by Automatic Speech Recognition (ASR) technologies. For each video, different learners conducted two rounds of editing. Based on editing logs, we created a taxonomy of errors in educational video captions (e.g., Discipline-Specific, General, Equations). From the interviews, we identified individual and collaborative error editing strategies. We then further demonstrated the feasibility of applying machine learning models to assist learners in editing. Our work provides practical implications for advancing video-based learning and for educational video caption editing. Springer US 2022-07-15 2022 /pmc/articles/PMC9285185/ /pubmed/35855355 http://dx.doi.org/10.1007/s11423-022-10137-5 Text en © Association for Educational Communications and Technology 2022 This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
spellingShingle Development Article
Bhavya, Bhavya
Chen, Si
Zhang, Zhilin
Li, Wenting
Zhai, Chengxiang
Angrave, Lawrence
Huang, Yun
Exploring collaborative caption editing to augment video-based learning
title Exploring collaborative caption editing to augment video-based learning
title_full Exploring collaborative caption editing to augment video-based learning
title_fullStr Exploring collaborative caption editing to augment video-based learning
title_full_unstemmed Exploring collaborative caption editing to augment video-based learning
title_short Exploring collaborative caption editing to augment video-based learning
title_sort exploring collaborative caption editing to augment video-based learning
topic Development Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9285185/
https://www.ncbi.nlm.nih.gov/pubmed/35855355
http://dx.doi.org/10.1007/s11423-022-10137-5
work_keys_str_mv AT bhavyabhavya exploringcollaborativecaptioneditingtoaugmentvideobasedlearning
AT chensi exploringcollaborativecaptioneditingtoaugmentvideobasedlearning
AT zhangzhilin exploringcollaborativecaptioneditingtoaugmentvideobasedlearning
AT liwenting exploringcollaborativecaptioneditingtoaugmentvideobasedlearning
AT zhaichengxiang exploringcollaborativecaptioneditingtoaugmentvideobasedlearning
AT angravelawrence exploringcollaborativecaptioneditingtoaugmentvideobasedlearning
AT huangyun exploringcollaborativecaptioneditingtoaugmentvideobasedlearning