
Evaluation of automatically generated English vocabulary questions

Bibliographic Details
Main Authors: Susanti, Yuni, Tokunaga, Takenobu, Nishikawa, Hitoshi, Obari, Hiroyuki
Format: Online Article Text
Language: English
Published: Springer Singapore 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6302865/
https://www.ncbi.nlm.nih.gov/pubmed/30613260
http://dx.doi.org/10.1186/s41039-017-0051-y
Description
Summary: This paper describes details of the evaluation experiments for questions created by an automatic question generation system. Given a target word and one of its word senses, the system generates a multiple-choice English vocabulary question asking for the option closest in meaning to the target word in the reading passage. Two kinds of evaluation were conducted, considering two aspects: (1) the questions' ability to measure English learners' proficiency and (2) their similarity to human-made questions. The first evaluation is based on responses from English learners, obtained by administering both the machine-generated and human-made questions to them; the second is based on subjective judgements by English teachers. Both evaluations showed that the machine-generated questions achieved a level comparable to the human-made questions, in measuring English proficiency as well as in similarity.