Automated language essay scoring systems: a literature review


Bibliographic Details
Main Authors: Hussein, Mohamed Abdellatif, Hassan, Hesham, Nassef, Mohammad
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7924549/
https://www.ncbi.nlm.nih.gov/pubmed/33816861
http://dx.doi.org/10.7717/peerj-cs.208
_version_ 1783659112048885760
author Hussein, Mohamed Abdellatif
Hassan, Hesham
Nassef, Mohammad
author_facet Hussein, Mohamed Abdellatif
Hassan, Hesham
Nassef, Mohammad
author_sort Hussein, Mohamed Abdellatif
collection PubMed
description BACKGROUND: Writing composition is a significant factor in measuring test-takers' ability in any language exam. However, the assessment (scoring) of these writing compositions, or essays, is a very challenging process in terms of reliability and time. The need for objective and quick scores has raised the demand for computer systems that can automatically grade essay questions targeting specific prompts. Automated Essay Scoring (AES) systems address the challenges of scoring writing tasks by using Natural Language Processing (NLP) and machine learning techniques. The purpose of this paper is to review the literature on AES systems used for grading essay questions.
METHODOLOGY: We reviewed the existing literature using Google Scholar, EBSCO, and ERIC, searching for the terms "AES", "Automated Essay Scoring", "Automated Essay Grading", or "Automatic Essay" for essays written in the English language. Two categories have been identified: Handcrafted Features AES systems and Automatic Featuring AES systems. Systems in the former category are closely tied to the quality of their designed features, whereas systems in the latter category automatically learn the features and the relations between an essay and its score, without any handcrafted features. We reviewed the systems of both categories in terms of their primary focus, the technique(s) used, the need for training data, instructional application (feedback system), and the correlation between e-scores and human scores. The paper includes three main sections: first, a structured literature review of the available Handcrafted Features AES systems; second, a structured literature review of the available Automatic Featuring AES systems; and finally, a set of discussions and conclusions.
RESULTS: AES models were found to utilize a broad range of manually tuned shallow and deep linguistic features. AES systems have clear strengths: they reduce labor-intensive marking activities, ensure a consistent application of scoring criteria, and ensure the objectivity of scoring. Although many techniques have been implemented to improve AES systems, three primary challenges remain: the lack of the human sense of a rater, the potential for the systems to be deceived into giving an essay a lower or higher score than it deserves, and the limited ability to assess the creativity and practicality of ideas and propositions. To date, the proposed techniques address only the first two challenges.
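The "Handcrafted Features" category described in the abstract lends itself to a short illustration. The following Python sketch, built on assumed toy data and hypothetical feature choices (essay length, type-token ratio, average word and sentence lengths), fits a simple regression from shallow handcrafted features to scores, then measures the correlation between its e-scores and human scores; it is a minimal sketch of the general approach, not the method of any particular system reviewed in the paper.

```python
# A minimal, illustrative sketch of a Handcrafted Features AES pipeline:
# shallow linguistic features feed a regression model, and agreement with
# human raters is measured by correlation. All features, data, and model
# choices here are assumptions for illustration only.

import re
from statistics import mean

import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression


def shallow_features(essay: str) -> list[float]:
    """Compute a few shallow linguistic features (hypothetical choices)."""
    words = re.findall(r"[A-Za-z']+", essay)
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    n_words = max(len(words), 1)
    return [
        float(n_words),                                 # essay length
        len({w.lower() for w in words}) / n_words,      # type-token ratio
        mean(len(w) for w in words) if words else 0.0,  # avg word length
        n_words / max(len(sentences), 1),               # avg sentence length
    ]


# Toy training data: essays paired with human-assigned scores (placeholders).
train_essays = [
    "The cat sat. It was warm.",
    "Reading widely broadens the mind; it also builds vocabulary, empathy, "
    "and a sense of how strong arguments are structured.",
]
train_scores = [1.0, 4.0]

model = LinearRegression().fit(
    np.array([shallow_features(e) for e in train_essays]), train_scores
)

# Score unseen essays and compare e-scores with human scores, mirroring the
# correlation criterion the review uses to evaluate AES systems.
test_essays = [
    "Dogs bark. They run.",
    "Writing well takes deliberate practice, since careful revision "
    "sharpens both the ideas and the prose that carries them.",
]
human_scores = [1.0, 4.0]
e_scores = model.predict(np.array([shallow_features(e) for e in test_essays]))
r, _ = pearsonr(e_scores, human_scores)  # trivially +/-1 with only two essays
print(f"Pearson correlation with human scores: {r:.2f}")
```

A real system of this category would rely on far richer feature engineering (syntactic, discourse, and prompt-specific cues) and a rubric-scaled training set, which is why, as the review notes, the quality of such systems is closely tied to the quality of the designed features.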
format Online
Article
Text
id pubmed-7924549
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-7924549 2021-04-02 Automated language essay scoring systems: a literature review. Hussein, Mohamed Abdellatif; Hassan, Hesham; Nassef, Mohammad. PeerJ Comput Sci, Artificial Intelligence. PeerJ Inc. 2019-08-12 /pmc/articles/PMC7924549/ /pubmed/33816861 http://dx.doi.org/10.7717/peerj-cs.208 Text en ©2019 Hussein et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose, provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
spellingShingle Artificial Intelligence
Hussein, Mohamed Abdellatif
Hassan, Hesham
Nassef, Mohammad
Automated language essay scoring systems: a literature review
title Automated language essay scoring systems: a literature review
title_full Automated language essay scoring systems: a literature review
title_fullStr Automated language essay scoring systems: a literature review
title_full_unstemmed Automated language essay scoring systems: a literature review
title_short Automated language essay scoring systems: a literature review
title_sort automated language essay scoring systems: a literature review
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7924549/
https://www.ncbi.nlm.nih.gov/pubmed/33816861
http://dx.doi.org/10.7717/peerj-cs.208
work_keys_str_mv AT husseinmohamedabdellatif automatedlanguageessayscoringsystemsaliteraturereview
AT hassanhesham automatedlanguageessayscoringsystemsaliteraturereview
AT nassefmohammad automatedlanguageessayscoringsystemsaliteraturereview