
A framework for rigorous evaluation of human performance in human and machine learning comparison studies


Bibliographic Details
Main Authors: Cowley, Hannah P., Natter, Mandy, Gray-Roncal, Karla, Rhodes, Rebecca E., Johnson, Erik C., Drenkow, Nathan, Shead, Timothy M., Chance, Frances S., Wester, Brock, Gray-Roncal, William
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8971503/
https://www.ncbi.nlm.nih.gov/pubmed/35361786
http://dx.doi.org/10.1038/s41598-022-08078-3
Description
Summary: Rigorous comparisons of human and machine learning algorithm performance on the same task help to support accurate claims about algorithm success rates and advance understanding of their performance relative to that of human performers. In turn, these comparisons are critical for supporting advances in artificial intelligence. However, the machine learning community has lacked a standardized, consensus framework for performing the evaluations of human performance necessary for comparison. We demonstrate common pitfalls in designing the human performance evaluation and propose a framework for the evaluation of human performance, illustrating guiding principles for a successful comparison. These principles are: first, to design the human evaluation with an understanding of the differences between human and algorithm cognition; second, to match trials between human participants and the algorithm evaluation; and third, to employ best practices for psychology research studies, such as collecting and analyzing supplementary and subjective data and adhering to ethical review protocols. We demonstrate our framework’s utility for designing a study to evaluate human performance on a one-shot learning task. Adoption of this common framework may provide a standard approach to evaluate algorithm performance and aid in the reproducibility of comparisons between human and machine learning algorithm performance.