The role of automated evaluation techniques in online professional translator training
Main Authors: | Munkova, Dasa; Munk, Michal; Benko, Ľubomír; Hajek, Petr |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | PeerJ Inc., 2021 |
Subjects: | Computational Linguistics |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8507487/ https://www.ncbi.nlm.nih.gov/pubmed/34712792 http://dx.doi.org/10.7717/peerj-cs.706 |
author | Munkova, Dasa; Munk, Michal; Benko, Ľubomír; Hajek, Petr
author_sort | Munkova, Dasa |
collection | PubMed |
description | The rapid technologisation of translation has steered the translation industry towards machine translation, post-editing, subtitling services and video content translation. In addition, the pandemic situation associated with COVID-19 has rapidly accelerated the transfer of business and education to the virtual world. This situation has motivated us not only to look for new approaches to online translator training, which requires a different method than learning foreign languages, but in particular to look for new approaches to assessing translator performance within online educational environments. Translation quality assessment is a key task, as the concept of quality is closely linked to the concept of optimization. Automatic metrics are very good indicators of quality, but they do not provide sufficient and detailed linguistic information about translations or post-edited machine translations. However, using their residuals, we can identify the segments with the largest distances between the post-edited machine translations and the machine translations, allowing us to focus a more detailed textual analysis on suspicious segments. We introduce a unique online teaching and learning system, specifically “tailored” for online translator training, and subsequently we focus on a new approach to assessing translators’ competences using evaluation techniques: the metrics of automatic evaluation and their residuals. We show that the residuals of the metrics of accuracy (BLEU_n) and error rate (PER, WER, TER, CDER, and HTER) for machine translation post-editing are valid for translator assessment. Using the residuals of the metrics of accuracy and error rate, we can identify errors in post-editing (critical, major, and minor) and subsequently utilize them in a more detailed linguistic analysis.
format | Online Article Text |
id | pubmed-8507487 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | PeerJ Inc. |
record_format | MEDLINE/PubMed |
spelling | PeerJ Comput Sci, Computational Linguistics. PeerJ Inc., published 2021-10-04. /pmc/articles/PMC8507487/ /pubmed/34712792 http://dx.doi.org/10.7717/peerj-cs.706 © 2021 Munkova et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
title | The role of automated evaluation techniques in online professional translator training |
topic | Computational Linguistics |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8507487/ https://www.ncbi.nlm.nih.gov/pubmed/34712792 http://dx.doi.org/10.7717/peerj-cs.706 |
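The description above outlines the core technique: compute automatic evaluation metrics (BLEU_n, PER, WER, TER, CDER, HTER) on post-edited machine translation and use the metrics' residuals to flag the segments that deviate most, which are then candidates for detailed linguistic analysis. The following is a minimal illustrative sketch of that idea, not the authors' implementation: it computes a word-level error rate (in the spirit of WER) for each MT/post-edited segment pair and uses simple residuals around the mean score to rank segments for manual error analysis. The segment data and the residual definition are assumptions made purely for illustration.

```python
# Illustrative sketch (not the paper's implementation): per-segment word error rate
# between raw machine translation (MT) and its post-edited version (PE), plus simple
# residuals used to flag "suspicious" segments for closer linguistic inspection.

def word_edit_distance(hyp: str, ref: str) -> int:
    """Levenshtein distance computed on word tokens."""
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(h)][len(r)]

def wer(mt_segment: str, pe_segment: str) -> float:
    """Word error rate of the MT output, using the post-edited segment as reference."""
    ref_len = max(len(pe_segment.split()), 1)
    return word_edit_distance(mt_segment, pe_segment) / ref_len

# Hypothetical MT / post-edited segment pairs; in practice these would come from the
# online training system's logs.
segments = [
    ("the cat sat on the mat", "the cat sat on the mat"),
    ("he go to school yesterday", "he went to school yesterday"),
    ("machine translation are useful tool", "machine translation is a useful tool"),
]

scores = [wer(mt, pe) for mt, pe in segments]
mean_score = sum(scores) / len(scores)
residuals = [s - mean_score for s in scores]  # simplistic residuals around the mean score

# Segments with the largest absolute residuals are candidates for detailed textual
# analysis, e.g. classifying post-editing errors as critical, major or minor.
for idx, res in sorted(enumerate(residuals), key=lambda x: abs(x[1]), reverse=True):
    print(f"segment {idx}: WER={scores[idx]:.2f}, residual={res:+.2f}")
```

This sketch only illustrates the segment-flagging principle; the paper itself works with the full set of accuracy and error-rate metrics and a more elaborate residual analysis than a deviation from the mean.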