
On reliability of annotations in contextual emotion imagery


Bibliographic Details
Main Authors: Martínez-Miwa, Carlos A., Castelán, Mario
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10423283/
https://www.ncbi.nlm.nih.gov/pubmed/37573453
http://dx.doi.org/10.1038/s41597-023-02435-1
Description
Summary: We documented the relabeling process for a subset of a renowned database for emotion-in-context recognition, with the aim of improving the reliability of the final labels. To this end, emotion categories were organized into eight groups, and a large number of participants was recruited for tagging. A strict control strategy was applied throughout the experiments, whose duration averaged 13.45 minutes per day. Annotators were free to participate in any of the daily experiments (the average number of participants was 28), and a Z-score filtering technique was implemented to maintain the trustworthiness of the annotations. As a result, the agreement parameter Fleiss' Kappa progressively increased from slight to almost perfect, revealing a coherent diversity across the experiments. Our results support the hypothesis that a small number of categories and a large number of voters benefit the reliability of annotations in contextual emotion imagery.
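The summary reports inter-annotator agreement via Fleiss' Kappa. As a point of reference, the standard Fleiss' Kappa computation can be sketched as below; the data layout is hypothetical (the paper's actual pipeline is not described here), with counts[i][j] holding how many annotators assigned category j to image i, each image rated by the same number of annotators.

```python
def fleiss_kappa(counts):
    """Fleiss' Kappa for a table of per-item category counts.

    counts[i][j] = number of annotators who assigned category j to item i.
    Every item must be rated by the same number of annotators.
    """
    N = len(counts)        # number of items (e.g. images)
    n = sum(counts[0])     # annotators per item
    k = len(counts[0])     # number of categories (e.g. eight emotion groups)

    # Mean observed per-item agreement P_bar
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1))
        for row in counts
    ) / N

    # Chance agreement P_e from the marginal category proportions
    p = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    return (P_bar - P_e) / (1 - P_e)
```

For example, two items each rated identically by three annotators ([[3, 0], [0, 2 + 1 - 1 + 1]] style perfect-agreement tables such as [[3, 0], [0, 3]]) yield a kappa of 1.0. On the conventional Landis and Koch scale, values of 0.00-0.20 are "slight" and values above 0.80 are "almost perfect", the two endpoints the summary refers to.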