
Impact of Noisy Labels on Dental Deep Learning—Calculus Detection on Bitewing Radiographs


Bibliographic Details
Main Authors: Büttner, Martha, Schneider, Lisa, Krasowski, Aleksander, Krois, Joachim, Feldberg, Ben, Schwendicke, Falk
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10179289/
https://www.ncbi.nlm.nih.gov/pubmed/37176499
http://dx.doi.org/10.3390/jcm12093058
author Büttner, Martha
Schneider, Lisa
Krasowski, Aleksander
Krois, Joachim
Feldberg, Ben
Schwendicke, Falk
author_sort Büttner, Martha
collection PubMed
description Supervised deep learning requires labelled data. On medical images, data is often labelled inconsistently (e.g., with bounding boxes drawn too large) and with varying accuracy. We aimed to assess the impact of such label noise on dental calculus detection on bitewing radiographs. On 2584 bitewings, calculus was accurately labelled using bounding boxes (BBs); these BBs were then artificially increased and decreased in size stepwise, resulting in 30 consistently and 9 inconsistently noisy datasets. An object detection network (YOLOv5) was trained on each dataset and evaluated on both noisy and accurate test data. Training on accurately labelled data yielded an mAP50 of 0.77 (SD: 0.01). When trained on consistently too-small BBs, model performance decreased significantly on both accurate and noisy test data. Performance of models trained on consistently too-large BBs decreased immediately on accurate test data (e.g., 200% BBs: mAP50: 0.24; SD: 0.05; p < 0.05), but on noisy test data only after the BBs were increased drastically (e.g., 70,000%: mAP50: 0.75; SD: 0.01; p < 0.05). Models trained on inconsistent BB sizes showed a significant decrease in performance once sizes deviated 20% or more from the original when tested on noisy data (mAP50: 0.74; SD: 0.02; p < 0.05), or 30% or more when tested on accurate data (mAP50: 0.76; SD: 0.01; p < 0.05). In conclusion, accurate predictions require accurately labelled training data. Testing on noisy data may disguise the effects of noisy training data. Researchers should be aware of the relevance of accurately annotated data, especially when evaluating model performance.
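Note on the method: the record does not include code, but the label-noise simulation it describes (stepwise enlarging or shrinking bounding boxes, consistently or inconsistently) can be sketched in a few lines. The snippet below is a minimal, hypothetical Python illustration assuming YOLO-format labels (normalized x_center, y_center, width, height); the function names and the choice to apply the percentage to box width and height around a fixed center are assumptions, not the authors' published implementation.

    import random

    def scale_bbox(xc, yc, w, h, factor):
        # Scale a normalized YOLO box (x_center, y_center, width, height)
        # around its center by `factor`, clamping so it stays inside the image.
        nw = min(w * factor, 1.0)
        nh = min(h * factor, 1.0)
        nxc = min(max(xc, nw / 2), 1 - nw / 2)  # shift center away from the borders
        nyc = min(max(yc, nh / 2), 1 - nh / 2)
        return nxc, nyc, nw, nh

    def make_noisy_labels(boxes, factor=None, max_deviation=None):
        # Consistent noise: every box is scaled by the same `factor`.
        # Inconsistent noise: each box draws a random factor from
        # [1 - max_deviation, 1 + max_deviation].
        noisy = []
        for cls, xc, yc, w, h in boxes:
            f = factor if factor is not None else random.uniform(
                1 - max_deviation, 1 + max_deviation)
            noisy.append((cls, *scale_bbox(xc, yc, w, h, f)))
        return noisy

    # Example: one calculus box enlarged to 200% of its original side lengths.
    print(make_noisy_labels([(0, 0.4, 0.5, 0.10, 0.08)], factor=2.0))

For consistent noise, one factor (e.g., 2.0 for a 200% condition) is applied to every box; for inconsistent noise, each box draws its own factor, mirroring the 20% and 30% deviations reported above.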
format Online
Article
Text
id pubmed-10179289
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10179289 2023-05-13 Impact of Noisy Labels on Dental Deep Learning—Calculus Detection on Bitewing Radiographs. Büttner, Martha; Schneider, Lisa; Krasowski, Aleksander; Krois, Joachim; Feldberg, Ben; Schwendicke, Falk. J Clin Med, Article. MDPI 2023-04-23. /pmc/articles/PMC10179289/ /pubmed/37176499 http://dx.doi.org/10.3390/jcm12093058 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Impact of Noisy Labels on Dental Deep Learning—Calculus Detection on Bitewing Radiographs
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10179289/
https://www.ncbi.nlm.nih.gov/pubmed/37176499
http://dx.doi.org/10.3390/jcm12093058