Multi-scored sleep databases: how to exploit the multiple-labels in automated sleep scoring

Bibliographic Details
Main Authors: Fiorillo, Luigi; Pedroncelli, Davide; Agostini, Valentina; Favaro, Paolo; Faraci, Francesca Dalia
Format: Online Article Text
Language: English
Published: Oxford University Press 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10171642/
https://www.ncbi.nlm.nih.gov/pubmed/36762998
http://dx.doi.org/10.1093/sleep/zsad028
Description
Summary: STUDY OBJECTIVES: Inter-scorer variability in scoring polysomnograms is a well-known problem. Most existing automated sleep scoring systems are trained on labels annotated by a single scorer, whose subjective evaluation is transferred to the model. When annotations from two or more scorers are available, scoring models are usually trained on the scorer consensus. The averaged scorers' subjectivity is then transferred into the model, and information about the variability among the different scorers is lost. In this study, we aim to incorporate the knowledge of multiple physicians into the training procedure. The goal is to optimize model training by exploiting the full information that can be extracted from the consensus of a group of scorers. METHODS: We train two lightweight deep-learning-based models on three different multi-scored databases. We exploit the label smoothing technique together with a soft-consensus distribution (LS(SC)) to incorporate the knowledge of multiple scorers into the training procedure. We introduce the averaged cosine similarity (ACS) metric to quantify the similarity between the hypnodensity graph generated by a model trained with LS(SC) and the hypnodensity graph generated by the scorer consensus. RESULTS: Performance improves on all databases when the models are trained with our LS(SC). We found an increase in ACS (up to 6.4%) between the hypnodensity graph generated by the models trained with LS(SC) and the hypnodensity graph generated by the consensus. CONCLUSION: Our approach enables a model to better adapt to the consensus of the group of scorers. Future work will investigate different scoring architectures and, ideally, large-scale heterogeneous multi-scored datasets.
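The general ideas in the abstract — a soft-consensus distribution over sleep stages from several scorers, label-smoothing-style training targets, and an averaged cosine similarity between hypnodensity graphs — can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the function names, the `alpha` smoothing weight, and the exact blending formula are illustrative and are not taken from the paper's LS(SC) definition.

```python
import numpy as np

N_STAGES = 5  # assumption: AASM stages W, N1, N2, N3, REM


def soft_consensus(labels):
    """Per-epoch stage distribution from multiple scorers.

    labels: int array of shape (n_scorers, n_epochs).
    Returns an array of shape (n_epochs, N_STAGES) whose rows sum to 1.
    """
    counts = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=N_STAGES), 0, labels
    )  # shape (N_STAGES, n_epochs)
    return (counts / labels.shape[0]).T


def smoothed_targets(soft, alpha=0.1):
    """Blend the hard (majority) consensus with the soft-consensus,
    in the spirit of label smoothing. `alpha` is an illustrative weight."""
    hard = np.eye(soft.shape[1])[soft.argmax(axis=1)]
    return (1.0 - alpha) * hard + alpha * soft


def averaged_cosine_similarity(p, q):
    """Mean per-epoch cosine similarity between two hypnodensity graphs,
    each of shape (n_epochs, N_STAGES)."""
    num = (p * q).sum(axis=1)
    den = np.linalg.norm(p, axis=1) * np.linalg.norm(q, axis=1)
    return float((num / den).mean())
```

For example, with three scorers who agree on epoch 0 but split on later epochs, `soft_consensus` yields a probability row per epoch, `smoothed_targets` produces the soft training labels, and `averaged_cosine_similarity` of a hypnodensity graph with itself is 1.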