Comparing product quality between translation and paraphrasing: Using NLP-assisted evaluation frameworks
Main Authors: Han, Tianyi; Li, Dechao; Ma, Xingcheng; Hu, Nan
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9732433/ https://www.ncbi.nlm.nih.gov/pubmed/36506993 http://dx.doi.org/10.3389/fpsyg.2022.1048132
_version_ | 1784846132998832128 |
author | Han, Tianyi; Li, Dechao; Ma, Xingcheng; Hu, Nan |
author_facet | Han, Tianyi; Li, Dechao; Ma, Xingcheng; Hu, Nan |
author_sort | Han, Tianyi |
collection | PubMed |
description | Translation and paraphrasing, as typical forms of second language (L2) communication, have been considered effective learning methods in second language acquisition (SLA). While many studies have investigated their similarities and differences from a process-oriented perspective, little attention has been paid to the correlation in product quality between them, probably because of the difficulty of assessing the quality of translation and paraphrasing. Current quality evaluation methods tend to be either subjective and one-sided or to lack consistency and standardization. To address these limitations, we proposed preliminary evaluation frameworks for translation and paraphrasing that incorporate indices from natural language processing (NLP) tools into teachers’ rating rubrics, and we further compared the product quality of the two activities. Twenty-nine translators were recruited to perform a translation task (translating from Chinese to English) and a paraphrasing task (paraphrasing in English). Their output products were recorded with a key-logging technique and graded by three professional translation teachers using a 10-point Likert scale. The rating process adopted rubrics consisting of both holistic and analytical assessments. In addition, indices capturing textual features at the lexical and syntactic levels were extracted with TAASSC and TAALES. We identified indices that effectively predicted product quality using Pearson’s correlation analysis and combined them with the expert evaluation rubrics to establish NLP-assisted evaluation frameworks for translation and paraphrasing. With the help of these frameworks, we found that performance on the two tasks was closely related, as evidenced by several shared predictive indices of lexical sophistication and by strong positive correlations between translated and paraphrased text quality on all rating metrics. These similarities suggest shared language competence and mental strategies across different types of translation activities, and perhaps across other forms of language tasks. Meanwhile, we also observed differences in the most salient textual features between translations and paraphrases, mainly due to the different processing costs required by the two tasks. These findings enrich our understanding of the common ground and divergences in product quality between translation and paraphrasing and shed light on the pedagogical application of translation activities in classroom teaching. Moreover, the proposed evaluation framework can also inform the development of standardized evaluation frameworks for translation and paraphrasing in the future. |
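The index-screening step described in the abstract — correlating TAASSC/TAALES indices with teachers' ratings via Pearson's correlation and keeping the predictive ones — can be illustrated with a minimal sketch. This is a hypothetical Python example, assuming the indices and averaged ratings have been exported to CSV files; the file names, column names, and the 0.05 cutoff are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of the index-screening step: correlate NLP-derived indices
# with averaged teacher ratings and keep the statistically significant ones.
import pandas as pd
from scipy.stats import pearsonr

# Assumed inputs (one row per participant text):
#   indices.csv  - numeric lexical/syntactic indices exported from TAASSC / TAALES
#   ratings.csv  - column "mean_score" = mean of the three teachers' 10-point ratings
indices = pd.read_csv("indices.csv")        # columns: text_id, index_1, index_2, ...
ratings = pd.read_csv("ratings.csv")        # columns: text_id, mean_score
data = indices.merge(ratings, on="text_id")

predictive = []
for col in indices.columns.drop("text_id"):
    r, p = pearsonr(data[col], data["mean_score"])
    if p < 0.05:                            # keep indices significantly related to quality
        predictive.append((col, round(r, 3), round(p, 4)))

# Indices surviving this screen would then be folded into the teachers' rubric
# to form the NLP-assisted evaluation framework.
for name, r, p in sorted(predictive, key=lambda t: -abs(t[1])):
    print(f"{name}: r = {r}, p = {p}")
```

Running the same screen separately on the translation and paraphrasing outputs would show which predictive indices the two tasks share (e.g., the lexical sophistication measures mentioned above) and which differ.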
format | Online Article Text |
id | pubmed-9732433 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9732433 2022-12-10 Comparing product quality between translation and paraphrasing: Using NLP-assisted evaluation frameworks Han, Tianyi; Li, Dechao; Ma, Xingcheng; Hu, Nan Front Psychol Psychology [abstract as in the description field above] Frontiers Media S.A. 2022-11-25 /pmc/articles/PMC9732433/ /pubmed/36506993 http://dx.doi.org/10.3389/fpsyg.2022.1048132 Text en Copyright © 2022 Han, Li, Ma and Hu. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology; Han, Tianyi; Li, Dechao; Ma, Xingcheng; Hu, Nan; Comparing product quality between translation and paraphrasing: Using NLP-assisted evaluation frameworks |
title | Comparing product quality between translation and paraphrasing: Using NLP-assisted evaluation frameworks |
title_full | Comparing product quality between translation and paraphrasing: Using NLP-assisted evaluation frameworks |
title_fullStr | Comparing product quality between translation and paraphrasing: Using NLP-assisted evaluation frameworks |
title_full_unstemmed | Comparing product quality between translation and paraphrasing: Using NLP-assisted evaluation frameworks |
title_short | Comparing product quality between translation and paraphrasing: Using NLP-assisted evaluation frameworks |
title_sort | comparing product quality between translation and paraphrasing: using nlp-assisted evaluation frameworks |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9732433/ https://www.ncbi.nlm.nih.gov/pubmed/36506993 http://dx.doi.org/10.3389/fpsyg.2022.1048132 |
work_keys_str_mv | AT hantianyi comparingproductqualitybetweentranslationandparaphrasingusingnlpassistedevaluationframeworks AT lidechao comparingproductqualitybetweentranslationandparaphrasingusingnlpassistedevaluationframeworks AT maxingcheng comparingproductqualitybetweentranslationandparaphrasingusingnlpassistedevaluationframeworks AT hunan comparingproductqualitybetweentranslationandparaphrasingusingnlpassistedevaluationframeworks |