Dense deconvolution net: Multi path fusion and dense deconvolution for high resolution skin lesion segmentation
| | |
|---|---|
| Main Authors | |
| Format | Online Article Text |
| Language | English |
| Published | IOS Press, 2018 |
| Subjects | |
| Online Access | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6004941/ https://www.ncbi.nlm.nih.gov/pubmed/29758959 http://dx.doi.org/10.3233/THC-174633 |
| Summary | BACKGROUND: Dermoscopy imaging has been a routine examination approach for skin lesion diagnosis. Accurate segmentation is the first step in automatic dermoscopy image assessment. OBJECTIVE: The main challenges for skin lesion segmentation are the numerous variations in viewpoint and scale of the skin lesion region. METHODS: To handle these challenges, we propose a novel skin lesion segmentation network built on a very deep dense deconvolution network for dermoscopic images. Specifically, deep dense layers and the generic multi-path Deep RefineNet are combined to improve segmentation performance. The deep representations of all available layers are aggregated into global feature maps using skip connections, and the dense deconvolution layer is leveraged to capture diverse appearance features via contextual information. Finally, we apply the dense deconvolution layer to smooth the segmentation maps and obtain the final high-resolution output. RESULTS: Our proposed method outperforms state-of-the-art approaches on the publicly available 2016 and 2017 skin lesion challenge datasets, achieving accuracies of 96.0% and 93.9%, increases of 6.0% and 1.2% over the traditional method, respectively. CONCLUSIONS: Using the Dense Deconvolution Net, the average time to process one test image with our proposed framework was 0.253 s. |
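
The METHODS portion of the summary names two architectural ideas: dense deconvolution blocks that upsample feature maps while reusing earlier features, and multi-path (RefineNet-style) fusion of a deep, coarse feature path with a shallow, high-resolution skip path. Below is a minimal PyTorch sketch of both ideas; all class names, channel counts, and kernel sizes are assumptions chosen for illustration, not the authors' published configuration.

```python
# Illustrative sketch only: (1) a dense deconvolution block that upsamples
# 2x and densely concatenates its own refined output, and (2) RefineNet-style
# multi-path fusion of a coarse feature map with a high-resolution skip
# feature. Shapes and layer choices are assumptions, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseDeconvBlock(nn.Module):
    """Upsample 2x with a transposed convolution, then keep the upsampled
    map alongside a refining convolution's output (dense connectivity)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.deconv = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4,
                                         stride=2, padding=1)
        self.refine = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        up = F.relu(self.deconv(x))
        # Dense connection: concatenate input and refined features.
        return torch.cat([up, F.relu(self.refine(up))], dim=1)


class MultiPathFusion(nn.Module):
    """Fuse a deep, coarse path with a shallow skip path by adapting both
    to a common channel width and summing at the skip path's resolution."""

    def __init__(self, deep_ch, skip_ch, out_ch):
        super().__init__()
        self.adapt_deep = nn.Conv2d(deep_ch, out_ch, kernel_size=3, padding=1)
        self.adapt_skip = nn.Conv2d(skip_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, deep, skip):
        # Upsample the deep path to the skip path's spatial size, then sum.
        deep = F.interpolate(self.adapt_deep(deep), size=skip.shape[2:],
                             mode='bilinear', align_corners=False)
        return deep + self.adapt_skip(skip)


if __name__ == "__main__":
    deep = torch.randn(1, 256, 32, 32)   # coarse encoder output (assumed shape)
    skip = torch.randn(1, 64, 128, 128)  # shallow skip feature (assumed shape)
    fused = MultiPathFusion(256, 64, 64)(deep, skip)
    out = DenseDeconvBlock(64, 32)(fused)
    print(fused.shape, out.shape)
    # torch.Size([1, 64, 128, 128]) torch.Size([1, 64, 256, 256])
```

Running the sketch prints the fused and upsampled tensor shapes, showing the 2x spatial gain per dense deconvolution block that, repeated, would recover the high-resolution segmentation output the summary describes.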