How do machine-generated questions compare to human-generated questions?
Science instructors need questions for use in exams, homework assignments, class discussions, reviews, and other instructional activities. Textbooks never have enough questions, so instructors must find them from other sources or generate their own questions. In order to supply biology instructors with questions for college students in introductory biology classes, two algorithms were developed…
Main Authors: | Zhang, Lishan; VanLehn, Kurt
---|---
Format: | Online Article Text
Language: | English
Published: | Springer Singapore, 2016
Subjects: | Research
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6302853/ https://www.ncbi.nlm.nih.gov/pubmed/30613240 http://dx.doi.org/10.1186/s41039-016-0031-7
Field | Value
---|---
_version_ | 1783382065858740224
author | Zhang, Lishan; VanLehn, Kurt
author_facet | Zhang, Lishan; VanLehn, Kurt
author_sort | Zhang, Lishan |
collection | PubMed |
description | Science instructors need questions for use in exams, homework assignments, class discussions, reviews, and other instructional activities. Textbooks never have enough questions, so instructors must find them from other sources or generate their own questions. In order to supply biology instructors with questions for college students in introductory biology classes, two algorithms were developed. One generates questions from a formal representation of photosynthesis knowledge. The other collects biology questions from the web. The questions generated by these two methods were compared to questions from biology textbooks. Human students rated questions for their relevance, fluency, ambiguity, pedagogy, and depth. Questions were also rated by the authors according to the topic of the questions. Although the exact pattern of results depends on analytic assumptions, it appears that there is little difference in the pedagogical benefits of each class, but the questions generated from the knowledge base may be shallower than questions written by professionals. This suggests that all three types of questions may work equally well for helping students to learn. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/s41039-016-0031-7) contains supplementary material, which is available to authorized users. |
format | Online Article Text |
id | pubmed-6302853 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2016 |
publisher | Springer Singapore |
record_format | MEDLINE/PubMed |
spelling | pubmed-6302853 2019-01-04 How do machine-generated questions compare to human-generated questions? Zhang, Lishan; VanLehn, Kurt Res Pract Technol Enhanc Learn Research Science instructors need questions for use in exams, homework assignments, class discussions, reviews, and other instructional activities. Textbooks never have enough questions, so instructors must find them from other sources or generate their own questions. In order to supply biology instructors with questions for college students in introductory biology classes, two algorithms were developed. One generates questions from a formal representation of photosynthesis knowledge. The other collects biology questions from the web. The questions generated by these two methods were compared to questions from biology textbooks. Human students rated questions for their relevance, fluency, ambiguity, pedagogy, and depth. Questions were also rated by the authors according to the topic of the questions. Although the exact pattern of results depends on analytic assumptions, it appears that there is little difference in the pedagogical benefits of each class, but the questions generated from the knowledge base may be shallower than questions written by professionals. This suggests that all three types of questions may work equally well for helping students to learn. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/s41039-016-0031-7) contains supplementary material, which is available to authorized users. Springer Singapore 2016-03-24 2016 /pmc/articles/PMC6302853/ /pubmed/30613240 http://dx.doi.org/10.1186/s41039-016-0031-7 Text en © The Author(s) 2016 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. |
spellingShingle | Research Zhang, Lishan; VanLehn, Kurt How do machine-generated questions compare to human-generated questions?
title | How do machine-generated questions compare to human-generated questions? |
title_full | How do machine-generated questions compare to human-generated questions? |
title_fullStr | How do machine-generated questions compare to human-generated questions? |
title_full_unstemmed | How do machine-generated questions compare to human-generated questions? |
title_short | How do machine-generated questions compare to human-generated questions? |
title_sort | how do machine-generated questions compare to human-generated questions? |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6302853/ https://www.ncbi.nlm.nih.gov/pubmed/30613240 http://dx.doi.org/10.1186/s41039-016-0031-7 |
work_keys_str_mv | AT zhanglishan howdomachinegeneratedquestionscomparetohumangeneratedquestions AT vanlehnkurt howdomachinegeneratedquestionscomparetohumangeneratedquestions |