An Intelligent Diagnostic Model for Melasma Based on Deep Learning and Multimode Image Input
Main Authors: | Liu, Lin; Liang, Chen; Xue, Yuzhou; Chen, Tingqiao; Chen, Yangmei; Lan, Yufan; Wen, Jiamei; Shao, Xinyi; Chen, Jin |
Format: | Online Article Text |
Language: | English |
Published: | Springer Healthcare, 2022 |
Subjects: | Original Research |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9884721/ https://www.ncbi.nlm.nih.gov/pubmed/36577888 http://dx.doi.org/10.1007/s13555-022-00874-z |
author | Liu, Lin; Liang, Chen; Xue, Yuzhou; Chen, Tingqiao; Chen, Yangmei; Lan, Yufan; Wen, Jiamei; Shao, Xinyi; Chen, Jin |
collection | PubMed |
description | INTRODUCTION: The diagnosis of melasma is often based on the naked-eye judgment of physicians. However, this is a challenge for inexperienced physicians and non-professionals, and incorrect treatment may have serious consequences. It is therefore important to develop an accurate method for melasma diagnosis. The objective of this study was to develop and validate an intelligent, deep-learning-based diagnostic system for melasma images.
METHODS: A total of 8010 images from the VISIA system, comprising 4005 images of patients with melasma and 4005 images of patients without melasma, were collected for training and testing. Four high-performance architectures (DenseNet, ResNet, Swin Transformer, and MobileNet) were evaluated as melasma versus non-melasma binary classifiers. Furthermore, because the VISIA system captures five image modes per shot, these modes were fused via multichannel image input in different combinations to explore whether multimode images could improve network performance.
RESULTS: The proposed network based on DenseNet121 achieved the best performance, with an accuracy of 93.68% and an area under the curve (AUC) of 97.86% on the test set for the melasma classifier. Gradient-weighted Class Activation Mapping (Grad-CAM) results showed that the model was interpretable. In further experiments across the five VISIA modes, the best-performing single mode was "BROWN SPOTS." Additionally, the combination of the "NORMAL," "BROWN SPOTS," and "UV SPOTS" modes significantly improved network performance, achieving the highest accuracy of 97.4% and an AUC of 99.28%.
CONCLUSIONS: Deep learning is feasible for diagnosing melasma. The proposed network not only performs well on clinical images of melasma, but also achieves high accuracy by using multiple VISIA image modes.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s13555-022-00874-z. |
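The multichannel fusion step described in the abstract (stacking several VISIA modes into one network input) can be sketched as follows. This is a minimal illustration only: the `fuse_modes` helper, the per-mode min-max normalization, the 224×224 size, and the channel ordering are assumptions for demonstration, not the authors' actual preprocessing pipeline.

```python
import numpy as np

def fuse_modes(normal, brown_spots, uv_spots):
    """Stack three single-channel mode images of shape (H, W) into one
    channel-first (3, H, W) array, suitable as input to a CNN backbone."""
    fused = []
    for img in (normal, brown_spots, uv_spots):
        img = img.astype(np.float32)
        lo, hi = img.min(), img.max()
        # Min-max normalize each mode independently so no single mode
        # dominates the fused input purely through its intensity range.
        fused.append((img - lo) / (hi - lo) if hi > lo else np.zeros_like(img))
    return np.stack(fused, axis=0)

# Toy usage with random stand-ins for the "NORMAL", "BROWN SPOTS",
# and "UV SPOTS" mode images:
rng = np.random.default_rng(0)
h = w = 224
fused = fuse_modes(rng.integers(0, 256, (h, w)),
                   rng.integers(0, 256, (h, w)),
                   rng.integers(0, 256, (h, w)))
print(fused.shape)  # (3, 224, 224)
```

The same pattern extends to other mode combinations from the paper by changing which arrays are stacked; the network's first convolution simply needs its input-channel count to match.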
format | Online Article Text |
id | pubmed-9884721 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer Healthcare |
record_format | MEDLINE/PubMed |
spelling | pubmed-9884721 2023-01-31. An Intelligent Diagnostic Model for Melasma Based on Deep Learning and Multimode Image Input. Liu, Lin; Liang, Chen; Xue, Yuzhou; Chen, Tingqiao; Chen, Yangmei; Lan, Yufan; Wen, Jiamei; Shao, Xinyi; Chen, Jin. Dermatol Ther (Heidelb), Original Research. Springer Healthcare, published online 2022-12-28. /pmc/articles/PMC9884721/ /pubmed/36577888 http://dx.doi.org/10.1007/s13555-022-00874-z. © The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (https://creativecommons.org/licenses/by-nc/4.0/), which permits any non-commercial use, sharing, adaptation, distribution, and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated. |
title | An Intelligent Diagnostic Model for Melasma Based on Deep Learning and Multimode Image Input |
topic | Original Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9884721/ https://www.ncbi.nlm.nih.gov/pubmed/36577888 http://dx.doi.org/10.1007/s13555-022-00874-z |