The impact of automated writing evaluation (AWE) on EFL learners’ peer and self-editing
Automated Writing Evaluation (AWE) is one of the machine techniques used for assessing learners’ writing. Recently, this technique has been widely implemented for improving learners’ editing strategies. Several studies have been conducted to compare self-editing with peer editing. However, only a few studies have compared automated peer and self-editing. To fill this research gap, the present study implements AWE software, WRITER, for peer and self-editing. For this purpose, a pre-post quasi-experimental research design with convenience sampling is done for automated and non-automated editing of cause-effect essay writing. Arab, EFL learners of English, 44 in number, have been assigned to four groups: two peer and self-editing control groups and two automated peer and self-editing experimental groups. There is a triangulation of the quasi-experimental design with qualitative data from retrospective notes and questionnaire responses of the participants during and after automated editing. The quantitative data have been analyzed using non-parametric tests. The qualitative data have undergone thematic and content analysis. The results reveal that the AWE software has positively affected both the peer and self-editing experimental groups. However, no significant difference is detected between them. The analysis of the qualitative data reflects participants’ positive evaluation of both the software and the automated peer and self-editing experience.
Main Authors: | Al-Inbari, Fatima Abdullah Yahya; Al-Wasy, Baleigh Qassim Mohammed |
Format: | Online Article Text |
Language: | English |
Published: | Springer US, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9684760/ https://www.ncbi.nlm.nih.gov/pubmed/36465424 http://dx.doi.org/10.1007/s10639-022-11458-x |
_version_ | 1784835363000287232 |
author | Al-Inbari, Fatima Abdullah Yahya Al-Wasy, Baleigh Qassim Mohammed |
author_sort | Al-Inbari, Fatima Abdullah Yahya |
collection | PubMed |
description | Automated Writing Evaluation (AWE) is one of the machine techniques used for assessing learners’ writing. Recently, this technique has been widely implemented for improving learners’ editing strategies. Several studies have been conducted to compare self-editing with peer editing. However, only a few studies have compared automated peer and self-editing. To fill this research gap, the present study implements AWE software, WRITER, for peer and self-editing. For this purpose, a pre-post quasi-experimental research design with convenience sampling is done for automated and non-automated editing of cause-effect essay writing. Arab, EFL learners of English, 44 in number, have been assigned to four groups: two peer and self-editing control groups and two automated peer and self-editing experimental groups. There is a triangulation of the quasi-experimental design with qualitative data from retrospective notes and questionnaire responses of the participants during and after automated editing. The quantitative data have been analyzed using non-parametric tests. The qualitative data have undergone thematic and content analysis. The results reveal that the AWE software has positively affected both the peer and self-editing experimental groups. However, no significant difference is detected between them. The analysis of the qualitative data reflects participants’ positive evaluation of both the software and the automated peer and self-editing experience. |
format | Online Article Text |
id | pubmed-9684760 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer US |
record_format | MEDLINE/PubMed |
spelling | pubmed-96847602022-11-28 The impact of automated writing evaluation (AWE) on EFL learners’ peer and self-editing Al-Inbari, Fatima Abdullah Yahya Al-Wasy, Baleigh Qassim Mohammed Educ Inf Technol (Dordr) Article Automated Writing Evaluation (AWE) is one of the machine techniques used for assessing learners’ writing. Recently, this technique has been widely implemented for improving learners’ editing strategies. Several studies have been conducted to compare self-editing with peer editing. However, only a few studies have compared automated peer and self-editing. To fill this research gap, the present study implements AWE software, WRITER, for peer and self-editing. For this purpose, a pre-post quasi-experimental research design with convenience sampling is done for automated and non-automated editing of cause-effect essay writing. Arab, EFL learners of English, 44 in number, have been assigned to four groups: two peer and self-editing control groups and two automated peer and self-editing experimental groups. There is a triangulation of the quasi-experimental design with qualitative data from retrospective notes and questionnaire responses of the participants during and after automated editing. The quantitative data have been analyzed using non-parametric tests. The qualitative data have undergone thematic and content analysis. The results reveal that the AWE software has positively affected both the peer and self-editing experimental groups. However, no significant difference is detected between them. The analysis of the qualitative data reflects participants’ positive evaluation of both the software and the automated peer and self-editing experience. Springer US 2022-11-21 2023 /pmc/articles/PMC9684760/ /pubmed/36465424 http://dx.doi.org/10.1007/s10639-022-11458-x Text en © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022, Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
title | The impact of automated writing evaluation (AWE) on EFL learners’ peer and self-editing |
title_sort | impact of automated writing evaluation (awe) on efl learners’ peer and self-editing |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9684760/ https://www.ncbi.nlm.nih.gov/pubmed/36465424 http://dx.doi.org/10.1007/s10639-022-11458-x |