
Distilling vector space model scores for the assessment of constructed responses with bifactor Inbuilt Rubric method and latent variables

In this paper, we highlight the importance of distilling the computational assessments of constructed responses to validate the indicators/proxies of constructs/trins, using an empirical illustration in automated summary evaluation. We present the validation of the Inbuilt Rubric (IR) method, which maps rubrics into vector spaces for the assessment of concepts. Specifically, we improved and validated the performance of its scores using latent variables, a common approach in psychometrics. We also validated a new hierarchical vector space, namely a bifactor IR. A total of 205 Spanish undergraduate students produced 615 summaries of three different texts, which were evaluated by human raters and by different versions of the IR method using latent semantic analysis (LSA). The computational scores were validated using multiple linear regressions and different latent variable models such as CFAs and SEMs. Convergent and discriminant validity was found for the IR scores using human rater scores as validity criteria. While this study was conducted in Spanish, the proposed scheme is language-independent and applicable to any language. We highlight four main conclusions: (1) Accurate performance can be observed in topic-detection tasks without the hundreds or thousands of pre-scored samples required by supervised models. (2) Convergent/discriminant validity can be improved by using measurement models for the computational scores, as they adjust for measurement errors. (3) Nouns embedded in fragments of instructional text can be an affordable alternative for applying the IR method. (4) Hierarchical models, like the bifactor IR, can increase the validity of computational assessments by evaluating general and specific knowledge in vector space models. R code is provided to apply the classic and bifactor IR methods.
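For readers who want a concrete picture of the approach described above, the following is a minimal, illustrative sketch in Python (scikit-learn) of scoring a summary against rubric concepts in an LSA space. The toy corpus, the rubric word lists, and the centroid-based concept vectors are assumptions made for illustration; they simplify the published Inbuilt Rubric transformation, and the paper itself provides R code for the actual classic and bifactor IR methods.

# Minimal sketch: build an LSA space and score a summary against rubric concepts.
# The corpus, rubric lexemes and dimensionality are illustrative placeholders,
# not the materials or the exact procedure used in the study.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy instructional corpus (in practice, a large domain corpus is used).
corpus = [
    "the heart pumps blood through the circulatory system",
    "arteries carry oxygenated blood away from the heart",
    "veins return deoxygenated blood to the heart",
    "the lungs oxygenate blood during respiration",
]

# 1) Latent semantic space: term-document matrix followed by truncated SVD.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)              # documents x terms
svd = TruncatedSVD(n_components=3, random_state=0)
svd.fit(X)
terms = vectorizer.get_feature_names_out()
term_vectors = svd.components_.T                  # terms x latent dimensions

# 2) Hypothetical rubric: each concept is represented here by the centroid of the
#    LSA vectors of its descriptor words (the published method instead transforms
#    the space so that rubric concepts become axes).
rubric = {
    "circulation": ["heart", "blood", "arteries", "veins"],
    "respiration": ["lungs", "respiration", "oxygenate"],
}
vocab = {t: i for i, t in enumerate(terms)}
concept_vectors = {
    concept: np.mean([term_vectors[vocab[w]] for w in words if w in vocab], axis=0)
    for concept, words in rubric.items()
}

# 3) Score a student summary: project it into the space and take the cosine
#    similarity with each rubric-concept vector.
summary = "blood is pumped by the heart and oxygenated in the lungs"
summary_vec = svd.transform(vectorizer.transform([summary]))
for concept, cvec in concept_vectors.items():
    score = cosine_similarity(summary_vec, cvec.reshape(1, -1))[0, 0]
    print(f"{concept}: {score:.2f}")

As the abstract indicates, the published method maps the rubric into the vector space itself and then models the resulting scores with latent variable models (e.g., a bifactor structure); the sketch above only reproduces the basic scoring intuition.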


Bibliographic Details
Main Authors: Martínez-Huertas, José Ángel; Olmos, Ricardo; Jorge-Botana, Guillermo; León, José A.
Format: Online Article Text
Language: English
Published: Springer US, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9579084/
https://www.ncbi.nlm.nih.gov/pubmed/35018609
http://dx.doi.org/10.3758/s13428-021-01764-6
collection PubMed
id pubmed-9579084
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed