Semi-Supervised Learning for Defect Segmentation with Autoencoder Auxiliary Module
In general, one may have access to only a handful of labeled normal and defect samples. Most unlabeled datasets consist of normal samples because defects occur rarely. Thus, the majority of anomaly-detection approaches are formulated as unsupervised problems. Most previous methods have...
Main Authors: | Sae-ang, Bee-ing; Kumwilaisak, Wuttipong; Kaewtrakulpong, Pakorn |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9030561/ https://www.ncbi.nlm.nih.gov/pubmed/35458900 http://dx.doi.org/10.3390/s22082915 |
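
The abstract describes a two-stage pipeline: an autoencoder, trained on the mostly normal unlabeled set, reconstructs a defect-free version of the input; the absolute difference between input and reconstruction is then fed, together with the input image, into a segmentation module. The article's actual architecture is not given in this record, so the following is only a minimal PyTorch-style sketch of such a pipeline; the `ConvAutoencoder` and `SegmentationModule` names, layer sizes, and channel counts are illustrative assumptions, not the authors' design.

```python
# Illustrative sketch only (not the authors' code): a convolutional autoencoder
# plus a segmentation head that consumes the input image concatenated with the
# reconstruction-difference map. All layer sizes are arbitrary assumptions.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Reconstructs (mostly normal) images; trained on the unlabeled set."""
    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_ch, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class SegmentationModule(nn.Module):
    """Predicts a per-pixel defect logit from image + difference map (2 * in_ch channels)."""
    def __init__(self, in_ch: int = 6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),  # raw logits; no hand-tuned reconstruction-error threshold
        )

    def forward(self, image, diff):
        return self.net(torch.cat([image, diff], dim=1))

def defect_logits(autoencoder, segmenter, image):
    """Full pipeline: reconstruct, take the absolute difference, segment."""
    with torch.no_grad():
        recon = autoencoder(image)
    diff = torch.abs(image - recon)   # difference map between input and reconstruction
    return segmenter(image, diff)     # per-pixel defect logits
```

Because the segmentation head is trained with supervision, the defect map comes directly from its output rather than from thresholding the reconstruction error, which is the point the abstract makes about avoiding a hand-picked threshold.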
_version_ | 1784692171459264512 |
---|---|
author | Sae-ang, Bee-ing Kumwilaisak, Wuttipong Kaewtrakulpong, Pakorn |
author_sort | Sae-ang, Bee-ing |
collection | PubMed |
description | In general, one may have access to only a handful of labeled normal and defect samples. Most unlabeled datasets consist of normal samples because defects occur rarely. Thus, the majority of anomaly-detection approaches are formulated as unsupervised problems. Most previous methods train an autoencoder to capture the common characteristics of the unlabeled dataset, assumed to be normal characteristics, and treat the poorly reconstructed area of an image as the defect area. However, ground-truth data are wasted if left unused, and a suitable threshold value must still be chosen for anomaly segmentation. In our study, we propose a semi-supervised setting that makes use of both unlabeled and labeled samples, and the network is trained to segment defect regions automatically. We first train an autoencoder to reconstruct defect-free images from an unlabeled dataset that mostly contains normal samples. Then, a difference map between the input and the reconstructed image is calculated and fed, together with the corresponding input image, into the subsequent segmentation module. We share the ground truth across both kinds of input and train the network with a binary cross-entropy loss. Additional difference images also increase stability during training. Finally, extensive experimental results show that, with the help of a handful of ground-truth segmentation maps, the overall result improves by 3.83%. |
format | Online Article Text |
id | pubmed-9030561 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9030561 2022-04-23 Semi-Supervised Learning for Defect Segmentation with Autoencoder Auxiliary Module Sae-ang, Bee-ing Kumwilaisak, Wuttipong Kaewtrakulpong, Pakorn Sensors (Basel) Article In general, one may have access to only a handful of labeled normal and defect samples. Most unlabeled datasets consist of normal samples because defects occur rarely. Thus, the majority of anomaly-detection approaches are formulated as unsupervised problems. Most previous methods train an autoencoder to capture the common characteristics of the unlabeled dataset, assumed to be normal characteristics, and treat the poorly reconstructed area of an image as the defect area. However, ground-truth data are wasted if left unused, and a suitable threshold value must still be chosen for anomaly segmentation. In our study, we propose a semi-supervised setting that makes use of both unlabeled and labeled samples, and the network is trained to segment defect regions automatically. We first train an autoencoder to reconstruct defect-free images from an unlabeled dataset that mostly contains normal samples. Then, a difference map between the input and the reconstructed image is calculated and fed, together with the corresponding input image, into the subsequent segmentation module. We share the ground truth across both kinds of input and train the network with a binary cross-entropy loss. Additional difference images also increase stability during training. Finally, extensive experimental results show that, with the help of a handful of ground-truth segmentation maps, the overall result improves by 3.83%. MDPI 2022-04-11 /pmc/articles/PMC9030561/ /pubmed/35458900 http://dx.doi.org/10.3390/s22082915 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
title | Semi-Supervised Learning for Defect Segmentation with Autoencoder Auxiliary Module |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9030561/ https://www.ncbi.nlm.nih.gov/pubmed/35458900 http://dx.doi.org/10.3390/s22082915 |
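
As a complement, here is a hedged sketch of the two training stages the description implies: the autoencoder is first fitted to the unlabeled, mostly normal images, and the segmentation module is then trained on the small labeled subset with a binary cross-entropy loss against the ground-truth defect masks. Only the binary cross-entropy objective and the use of the difference map come from the record; the MSE reconstruction loss, the helper names `pretrain_autoencoder` and `train_segmenter`, and all optimizer details are assumptions for illustration.

```python
# Illustrative sketch only; not the authors' training procedure. Works with the
# modules from the previous sketch (or any nn.Module pair with the same interfaces).
import torch
import torch.nn.functional as F

def pretrain_autoencoder(autoencoder, unlabeled_loader, optimizer, device="cpu"):
    """Stage 1: fit the autoencoder to unlabeled, mostly normal images (MSE assumed)."""
    autoencoder.train()
    for images in unlabeled_loader:                    # images: (B, C, H, W)
        images = images.to(device)
        recon = autoencoder(images)
        loss = F.mse_loss(recon, images)               # reconstruction objective (assumption)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def train_segmenter(autoencoder, segmenter, labeled_loader, optimizer, device="cpu"):
    """Stage 2: train the segmentation module with BCE on ground-truth defect masks."""
    autoencoder.eval()                                 # reconstructions are kept fixed here
    segmenter.train()
    bce = torch.nn.BCEWithLogitsLoss()
    for images, masks in labeled_loader:               # masks: (B, 1, H, W), values in {0, 1}
        images, masks = images.to(device), masks.to(device)
        with torch.no_grad():
            recon = autoencoder(images)
        diff = torch.abs(images - recon)               # difference map
        logits = segmenter(images, diff)               # image + difference map as input
        loss = bce(logits, masks.float())              # binary cross-entropy loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```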