Impact of Image Context on Deep Learning for Classification of Teeth on Radiographs
Main Authors: | Krois, Joachim, Schneider, Lisa, Schwendicke, Falk |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8068972/ https://www.ncbi.nlm.nih.gov/pubmed/33921440 http://dx.doi.org/10.3390/jcm10081635 |
_version_ | 1783683128015978496 |
---|---|
author | Krois, Joachim Schneider, Lisa Schwendicke, Falk |
author_facet | Krois, Joachim Schneider, Lisa Schwendicke, Falk |
author_sort | Krois, Joachim |
collection | PubMed |
description | Objectives: We aimed to assess the impact of image context information on the accuracy of deep learning models for tooth classification on panoramic dental radiographs. Methods: Our dataset contained 5008 panoramic radiographs with a mean number of 25.2 teeth per image. Teeth were segmented bounding-box-wise and classified by one expert; this was validated by another expert. Tooth segments were cropped allowing for different context; the baseline size was 100% of each box and was scaled up to capture 150%, 200%, 250% and 300% to increase context. On each of the five generated datasets, ResNet-34 classification models were trained using the Adam optimizer with a learning rate of 0.001 over 25 epochs with a batch size of 16. A total of 20% of the data was used for testing; in subgroup analyses, models were tested only on specific tooth types. Feature visualization using gradient-weighted class activation mapping (Grad-CAM) was employed to visualize salient areas. Results: F1-scores increased monotonically from 0.77 in the base-case (100%) to 0.93 on the largest segments (300%; p = 0.0083; Mann–Kendall-test). Gains in accuracy were limited between 200% and 300%. This behavior was found for all tooth types except canines, where accuracy was much higher even for smaller segments and increasing context yielded only minimal gains. With increasing context salient areas were more widely distributed over each segment; at maximum segment size, the models assessed minimum 3–4 teeth as well as the interdental or inter-arch space to come to a classification. Conclusions: Context matters; classification accuracy increased significantly with increasing context. |
format | Online Article Text |
id | pubmed-8068972 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8068972 2021-04-26 Impact of Image Context on Deep Learning for Classification of Teeth on Radiographs Krois, Joachim Schneider, Lisa Schwendicke, Falk J Clin Med Article Objectives: We aimed to assess the impact of image context information on the accuracy of deep learning models for tooth classification on panoramic dental radiographs. Methods: Our dataset contained 5008 panoramic radiographs with a mean number of 25.2 teeth per image. Teeth were segmented bounding-box-wise and classified by one expert; this was validated by another expert. Tooth segments were cropped allowing for different context; the baseline size was 100% of each box and was scaled up to capture 150%, 200%, 250% and 300% to increase context. On each of the five generated datasets, ResNet-34 classification models were trained using the Adam optimizer with a learning rate of 0.001 over 25 epochs with a batch size of 16. A total of 20% of the data was used for testing; in subgroup analyses, models were tested only on specific tooth types. Feature visualization using gradient-weighted class activation mapping (Grad-CAM) was employed to visualize salient areas. Results: F1-scores increased monotonically from 0.77 in the base-case (100%) to 0.93 on the largest segments (300%; p = 0.0083; Mann–Kendall-test). Gains in accuracy were limited between 200% and 300%. This behavior was found for all tooth types except canines, where accuracy was much higher even for smaller segments and increasing context yielded only minimal gains. With increasing context salient areas were more widely distributed over each segment; at maximum segment size, the models assessed minimum 3–4 teeth as well as the interdental or inter-arch space to come to a classification. Conclusions: Context matters; classification accuracy increased significantly with increasing context. MDPI 2021-04-12 /pmc/articles/PMC8068972/ /pubmed/33921440 http://dx.doi.org/10.3390/jcm10081635 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Krois, Joachim Schneider, Lisa Schwendicke, Falk Impact of Image Context on Deep Learning for Classification of Teeth on Radiographs |
title | Impact of Image Context on Deep Learning for Classification of Teeth on Radiographs |
title_full | Impact of Image Context on Deep Learning for Classification of Teeth on Radiographs |
title_fullStr | Impact of Image Context on Deep Learning for Classification of Teeth on Radiographs |
title_full_unstemmed | Impact of Image Context on Deep Learning for Classification of Teeth on Radiographs |
title_short | Impact of Image Context on Deep Learning for Classification of Teeth on Radiographs |
title_sort | impact of image context on deep learning for classification of teeth on radiographs |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8068972/ https://www.ncbi.nlm.nih.gov/pubmed/33921440 http://dx.doi.org/10.3390/jcm10081635 |
work_keys_str_mv | AT kroisjoachim impactofimagecontextondeeplearningforclassificationofteethonradiographs AT schneiderlisa impactofimagecontextondeeplearningforclassificationofteethonradiographs AT schwendickefalk impactofimagecontextondeeplearningforclassificationofteethonradiographs |
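The description field above explains that each tooth segment was cropped with increasing image context: the annotated bounding box was scaled from 100% up to 300% of its original size before cropping. As a rough illustration, a minimal Python sketch of such a context-scaling crop is given below; the function name, the use of Pillow, and the example box coordinates are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the context-scaling crop described in the abstract:
# each tooth bounding box is enlarged around its centre by a factor of
# 1.0-3.0 (100%-300%) before cropping the panoramic radiograph.
from PIL import Image


def crop_with_context(image: Image.Image, box, scale: float) -> Image.Image:
    """Crop `box` = (x_min, y_min, x_max, y_max) enlarged by `scale`."""
    x_min, y_min, x_max, y_max = box
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    half_w = (x_max - x_min) * scale / 2.0
    half_h = (y_max - y_min) * scale / 2.0
    # Clip the enlarged box to the radiograph borders.
    left = max(0, int(cx - half_w))
    upper = max(0, int(cy - half_h))
    right = min(image.width, int(cx + half_w))
    lower = min(image.height, int(cy + half_h))
    return image.crop((left, upper, right, lower))


# One crop per context level (100%, 150%, 200%, 250%, 300%); the file name
# and box values are placeholders:
# radiograph = Image.open("panoramic.png")
# crops = {s: crop_with_context(radiograph, (420, 310, 520, 450), s)
#          for s in (1.0, 1.5, 2.0, 2.5, 3.0)}
```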
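The stated training configuration (ResNet-34, Adam optimizer, learning rate 0.001, 25 epochs, batch size 16, 20% of the data held out for testing) could be set up along the following lines in PyTorch. The dataset object, the number of tooth classes (assumed here to be the 32 permanent-tooth positions), and whether pretrained weights were used are not specified in the record and are assumptions.

```python
# Minimal PyTorch training sketch matching the hyper-parameters in the
# description field; names and dataset handling are illustrative only.
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split
from torchvision import models

NUM_CLASSES = 32            # assumption: one class per permanent-tooth position
BATCH_SIZE, EPOCHS, LR = 16, 25, 1e-3


def train(dataset):
    # Hold out 20% of the segments for testing, as stated in the abstract.
    n_test = int(0.2 * len(dataset))
    train_set, test_set = random_split(dataset, [len(dataset) - n_test, n_test])
    train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)

    # ResNet-34 with the final layer replaced for tooth classification;
    # weights=None (training from scratch) is an assumption.
    model = models.resnet34(weights=None)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

    optimizer = torch.optim.Adam(model.parameters(), lr=LR)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(EPOCHS):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model, test_set
```

One such model would be trained per context level (100% to 300%), and the resulting test F1-scores compared across levels.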
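Finally, the record mentions gradient-weighted class activation mapping (Grad-CAM) for visualising which regions the models rely on. A generic Grad-CAM sketch over the last convolutional stage of a ResNet-34 is shown below; it is the standard formulation, not the authors' code.

```python
# Generic Grad-CAM over the last convolutional block (layer4) of ResNet-34.
import torch
import torch.nn.functional as F


def grad_cam(model, image, target_class):
    """Return an (H, W) heat map for `image` of shape (1, 3, H, W)."""
    activations, gradients = [], []
    layer = model.layer4  # last convolutional stage of ResNet-34

    h1 = layer.register_forward_hook(lambda m, i, o: activations.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    model.eval()
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    h1.remove(); h2.remove()

    acts, grads = activations[0], gradients[0]        # both (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)    # global-average-pooled gradients
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()
```

Overlaying such heat maps on the cropped segments is one way to reproduce the qualitative finding that, with larger context, salient areas spread over several neighbouring teeth and the interdental or inter-arch space.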