
Generative Adversarial Network Image Synthesis Method for Skin Lesion Generation and Classification

Bibliographic Details
Main Authors: Mutepfe, Freedom, Kalejahi, Behnam Kiani, Meshgini, Saeed, Danishvar, Sebelan
Format: Online Article Text
Language: English
Published: Wolters Kluwer - Medknow 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8588886/
https://www.ncbi.nlm.nih.gov/pubmed/34820296
http://dx.doi.org/10.4103/jmss.JMSS_53_20
_version_ 1784598583428775936
author Mutepfe, Freedom
Kalejahi, Behnam Kiani
Meshgini, Saeed
Danishvar, Sebelan
author_facet Mutepfe, Freedom
Kalejahi, Behnam Kiani
Meshgini, Saeed
Danishvar, Sebelan
author_sort Mutepfe, Freedom
collection PubMed
description BACKGROUND: One of the common limitations in the treatment of cancer lies in the early detection of the disease. The customary medical practice for cancer examination is a visual inspection by the dermatologist followed by an invasive biopsy. Nonetheless, this symptomatic approach is time-consuming and prone to human error. An automated machine learning model is essential to enable fast diagnosis and early treatment. OBJECTIVE: The key objective of this study is to establish a fully automatic model that helps dermatologists in the skin cancer handling process in a way that could improve skin lesion classification accuracy. METHOD: The work was conducted by implementing a Deep Convolutional Generative Adversarial Network (DCGAN) using the Python-based deep learning library Keras. We incorporated effective image filtering and enhancement algorithms, such as the bilateral filter, to enhance feature detection and extraction during training. The DCGAN needed slightly more fine-tuning to reap a better return. Hyperparameter optimization was used to select the best-performing combinations of several network hyperparameters. In this work, we decreased the learning rate from the default 0.001 to 0.0002 and the momentum for the Adam optimization algorithm from 0.9 to 0.5 to reduce the instability issues associated with GAN models, and at each iteration the weights of the discriminative and generative networks were updated to balance the loss between them. We address a binary classification task that predicts the two classes present in our dataset, namely benign and malignant. Moreover, well-known metrics such as the area under the receiver operating characteristic curve and the confusion matrix were used to evaluate the results and classification accuracy. RESULTS: The model generated very plausible lesions during the early stages of the experiment, and we could easily visualise a smooth transition in resolution along the way. Thus, we achieved an overall test accuracy of 93.5% after fine-tuning most parameters of our network. CONCLUSION: This classification model provides spatial intelligence that could be useful in the future for cancer risk prediction. Unfortunately, it remains difficult to generate high-quality synthetic images that closely resemble real samples and to compare different classification methods, given that some methods use non-public datasets for training.
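The optimizer and update scheme described in the abstract (Adam with the learning rate lowered from 0.001 to 0.0002 and the momentum term lowered from 0.9 to 0.5, with discriminator and generator weights updated alternately at each iteration) can be illustrated with a minimal Keras sketch. The network architectures, image size, and batch handling below are hypothetical placeholders rather than the authors' exact configuration; the bilateral-filter preprocessing and the ROC-AUC/confusion-matrix evaluation are omitted.

```python
# Minimal DCGAN training sketch, assuming 64x64 RGB lesion images and toy
# generator/discriminator architectures. Only the Adam settings
# (learning_rate=0.0002, beta_1=0.5) and the alternating update scheme
# follow the description above.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

LATENT_DIM = 100            # assumed noise-vector size
IMG_SHAPE = (64, 64, 3)     # assumed lesion-image size

def build_generator():
    return models.Sequential([
        layers.Dense(8 * 8 * 128, activation="relu", input_dim=LATENT_DIM),
        layers.Reshape((8, 8, 128)),
        layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu"),
        layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh"),
    ])

def build_discriminator():
    return models.Sequential([
        layers.Conv2D(64, 4, strides=2, padding="same", input_shape=IMG_SHAPE),
        layers.LeakyReLU(0.2),
        layers.Conv2D(128, 4, strides=2, padding="same"),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])

# Adam with the reduced learning rate and momentum reported in the abstract.
d_opt = optimizers.Adam(learning_rate=0.0002, beta_1=0.5)
g_opt = optimizers.Adam(learning_rate=0.0002, beta_1=0.5)

discriminator = build_discriminator()
discriminator.compile(loss="binary_crossentropy", optimizer=d_opt, metrics=["accuracy"])

generator = build_generator()
discriminator.trainable = False          # freeze D inside the combined model
z = layers.Input(shape=(LATENT_DIM,))
combined = models.Model(z, discriminator(generator(z)))
combined.compile(loss="binary_crossentropy", optimizer=g_opt)

def train_step(real_images, batch_size=32):
    """One alternating update: discriminator on real/fake, then generator."""
    noise = np.random.normal(0, 1, (batch_size, LATENT_DIM))
    fake_images = generator.predict(noise, verbose=0)
    # Discriminator update: real samples labelled 1, synthetic samples 0.
    d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    # Generator update: push the discriminator toward labelling fakes as real.
    g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))
    return d_loss_real, d_loss_fake, g_loss
```

In this pattern the discriminator is compiled on its own and then frozen inside the combined model, so each call to train_step performs one discriminator update on real and synthetic batches followed by one generator update, which corresponds to the per-iteration balancing of the two losses described above.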
format Online
Article
Text
id pubmed-8588886
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Wolters Kluwer - Medknow
record_format MEDLINE/PubMed
spelling pubmed-8588886 2021-11-23 Generative Adversarial Network Image Synthesis Method for Skin Lesion Generation and Classification Mutepfe, Freedom Kalejahi, Behnam Kiani Meshgini, Saeed Danishvar, Sebelan J Med Signals Sens Original Article BACKGROUND: One of the common limitations in the treatment of cancer lies in the early detection of the disease. The customary medical practice for cancer examination is a visual inspection by the dermatologist followed by an invasive biopsy. Nonetheless, this symptomatic approach is time-consuming and prone to human error. An automated machine learning model is essential to enable fast diagnosis and early treatment. OBJECTIVE: The key objective of this study is to establish a fully automatic model that helps dermatologists in the skin cancer handling process in a way that could improve skin lesion classification accuracy. METHOD: The work was conducted by implementing a Deep Convolutional Generative Adversarial Network (DCGAN) using the Python-based deep learning library Keras. We incorporated effective image filtering and enhancement algorithms, such as the bilateral filter, to enhance feature detection and extraction during training. The DCGAN needed slightly more fine-tuning to reap a better return. Hyperparameter optimization was used to select the best-performing combinations of several network hyperparameters. In this work, we decreased the learning rate from the default 0.001 to 0.0002 and the momentum for the Adam optimization algorithm from 0.9 to 0.5 to reduce the instability issues associated with GAN models, and at each iteration the weights of the discriminative and generative networks were updated to balance the loss between them. We address a binary classification task that predicts the two classes present in our dataset, namely benign and malignant. Moreover, well-known metrics such as the area under the receiver operating characteristic curve and the confusion matrix were used to evaluate the results and classification accuracy. RESULTS: The model generated very plausible lesions during the early stages of the experiment, and we could easily visualise a smooth transition in resolution along the way. Thus, we achieved an overall test accuracy of 93.5% after fine-tuning most parameters of our network. CONCLUSION: This classification model provides spatial intelligence that could be useful in the future for cancer risk prediction. Unfortunately, it remains difficult to generate high-quality synthetic images that closely resemble real samples and to compare different classification methods, given that some methods use non-public datasets for training. Wolters Kluwer - Medknow 2021-10-20 /pmc/articles/PMC8588886/ /pubmed/34820296 http://dx.doi.org/10.4103/jmss.JMSS_53_20 Text en Copyright: © 2021 Journal of Medical Signals & Sensors https://creativecommons.org/licenses/by-nc-sa/4.0/ This is an open access journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms.
spellingShingle Original Article
Mutepfe, Freedom
Kalejahi, Behnam Kiani
Meshgini, Saeed
Danishvar, Sebelan
Generative Adversarial Network Image Synthesis Method for Skin Lesion Generation and Classification
title Generative Adversarial Network Image Synthesis Method for Skin Lesion Generation and Classification
title_full Generative Adversarial Network Image Synthesis Method for Skin Lesion Generation and Classification
title_fullStr Generative Adversarial Network Image Synthesis Method for Skin Lesion Generation and Classification
title_full_unstemmed Generative Adversarial Network Image Synthesis Method for Skin Lesion Generation and Classification
title_short Generative Adversarial Network Image Synthesis Method for Skin Lesion Generation and Classification
title_sort generative adversarial network image synthesis method for skin lesion generation and classification
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8588886/
https://www.ncbi.nlm.nih.gov/pubmed/34820296
http://dx.doi.org/10.4103/jmss.JMSS_53_20
work_keys_str_mv AT mutepfefreedom generativeadversarialnetworkimagesynthesismethodforskinlesiongenerationandclassification
AT kalejahibehnamkiani generativeadversarialnetworkimagesynthesismethodforskinlesiongenerationandclassification
AT meshginisaeed generativeadversarialnetworkimagesynthesismethodforskinlesiongenerationandclassification
AT danishvarsebelan generativeadversarialnetworkimagesynthesismethodforskinlesiongenerationandclassification