Automatic Scoring for Translations Based on Language Models
With the development of English education, translation scoring has gradually become a time-consuming and labor-intensive task, and it is difficult to ensure objectivity because of the subjective factors involved in manual grading. Because evaluating the quality of responses generated by a dialogue system is similar to evaluating the translations submitted by students, we selected two dialogue evaluation metrics to score translations automatically and applied them in a case study. The experiments show that the hybrid score of the two metrics is close to human scores. In conclusion, it is feasible to apply the evaluation metrics of dialogue systems to translation scoring, and this can provide a direction for improving automatic translation scoring in the future.
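The abstract describes the approach only at a high level: two dialogue evaluation metrics are applied to student translations, combined into a hybrid score, and compared with human scores. The sketch below is a minimal illustration of that idea, not the authors' implementation; the two metric functions, the mixing weight, and the sample data are assumptions made purely for demonstration, using only the Python standard library.

```python
# Minimal sketch of the hybrid-scoring idea from the abstract.
# The record does not name the two dialogue metrics, so metric_a and metric_b
# are hypothetical stand-ins (e.g., an overlap-based similarity and a crude
# fluency/adequacy proxy); the weight and sample data are invented.
from statistics import correlation  # Pearson correlation (Python 3.10+)


def metric_a(translation: str, reference: str) -> float:
    """Hypothetical metric 1: token-overlap (Jaccard) similarity in [0, 1]."""
    t, r = set(translation.lower().split()), set(reference.lower().split())
    return len(t & r) / max(len(t | r), 1)


def metric_b(translation: str, reference: str) -> float:
    """Hypothetical metric 2: length-ratio proxy in [0, 1]."""
    lt, lr = len(translation.split()), len(reference.split())
    return min(lt, lr) / max(lt, lr, 1)


def hybrid_score(translation: str, reference: str, w: float = 0.5) -> float:
    """Weighted combination of the two metrics; w is an assumed mixing weight."""
    return w * metric_a(translation, reference) + (1 - w) * metric_b(translation, reference)


# Feasibility check as described in the abstract: compare hybrid scores
# with (made-up) human ratings for the same translations.
reference = "the cat sat on the mat"
translations = ["the cat sat on the mat", "a cat is on mat", "dog runs fast"]
human_scores = [1.0, 0.6, 0.1]
auto_scores = [hybrid_score(t, reference) for t in translations]
print("Pearson r:", correlation(auto_scores, human_scores))
```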
Main Authors: | Wu, Diming; Wang, Mingke; Li, Xiaomin |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Hindawi 2022 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9256364/ https://www.ncbi.nlm.nih.gov/pubmed/35800703 http://dx.doi.org/10.1155/2022/2171206 |
_version_ | 1784741095318487040 |
---|---|
author | Wu, Diming; Wang, Mingke; Li, Xiaomin |
author_facet | Wu, Diming; Wang, Mingke; Li, Xiaomin |
author_sort | Wu, Diming |
collection | PubMed |
description | With the development of English education, translation scoring has gradually become a time-consuming and labor-intensive task, and it is difficult to ensure objectivity because of the subjective factors involved in manual grading. Because evaluating the quality of responses generated by a dialogue system is similar to evaluating the translations submitted by students, we selected two dialogue evaluation metrics to score translations automatically and applied them in a case study. The experiments show that the hybrid score of the two metrics is close to human scores. In conclusion, it is feasible to apply the evaluation metrics of dialogue systems to translation scoring, and this can provide a direction for improving automatic translation scoring in the future. |
format | Online Article Text |
id | pubmed-9256364 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Hindawi |
record_format | MEDLINE/PubMed |
spelling | pubmed-9256364 2022-07-06 Automatic Scoring for Translations Based on Language Models Wu, Diming; Wang, Mingke; Li, Xiaomin Comput Intell Neurosci Research Article With the development of English education, translation scoring has gradually become a time-consuming and labor-intensive task, and it is difficult to ensure objectivity because of the subjective factors involved in manual grading. Because evaluating the quality of responses generated by a dialogue system is similar to evaluating the translations submitted by students, we selected two dialogue evaluation metrics to score translations automatically and applied them in a case study. The experiments show that the hybrid score of the two metrics is close to human scores. In conclusion, it is feasible to apply the evaluation metrics of dialogue systems to translation scoring, and this can provide a direction for improving automatic translation scoring in the future. Hindawi 2022-06-28 /pmc/articles/PMC9256364/ /pubmed/35800703 http://dx.doi.org/10.1155/2022/2171206 Text en Copyright © 2022 Diming Wu et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Article Wu, Diming Wang, Mingke Li, Xiaomin Automatic Scoring for Translations Based on Language Models |
title | Automatic Scoring for Translations Based on Language Models |
title_full | Automatic Scoring for Translations Based on Language Models |
title_fullStr | Automatic Scoring for Translations Based on Language Models |
title_full_unstemmed | Automatic Scoring for Translations Based on Language Models |
title_short | Automatic Scoring for Translations Based on Language Models |
title_sort | automatic scoring for translations based on language models |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9256364/ https://www.ncbi.nlm.nih.gov/pubmed/35800703 http://dx.doi.org/10.1155/2022/2171206 |
work_keys_str_mv | AT wudiming automaticscoringfortranslationsbasedonlanguagemodels AT wangmingke automaticscoringfortranslationsbasedonlanguagemodels AT lixiaomin automaticscoringfortranslationsbasedonlanguagemodels |