
An Optimization-Based Technology Applied for Face Skin Symptom Detection

Bibliographic Details
Main Authors: Liao, Yuan-Hsun, Chang, Po-Chun, Wang, Chun-Cheng, Li, Hsiao-Hui
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9778148/
https://www.ncbi.nlm.nih.gov/pubmed/36553920
http://dx.doi.org/10.3390/healthcare10122396
Description
Summary: Face segmentation is very important for symptom detection, especially when images have complex backgrounds or noise. The complexity of the photo background, the clarity of the facial expressions, and the interference of other people's faces can all increase the difficulty of detection. Therefore, in this paper we propose a method that combines a mask region-based convolutional neural network (Mask R-CNN) with You Only Look Once version 4 (YOLOv4) to identify facial symptoms. We use face images from the public image databases DermNet and Freepic as the training source for the model. Face segmentation is first applied with Mask R-CNN. The images are then imported into ResNet-101, and the facial features are fused with the region of interest (RoI) in the feature pyramid network (FPN) structure. After removing non-face features and noise, the face region is accurately obtained. Next, the recognized face area and RoI data are used to identify facial symptoms (acne, freckles, and wrinkles) with YOLOv4. Finally, Mask R-CNN paired with You Only Look Once version 3 (YOLOv3) and with YOLOv4 are compared in a performance analysis. Although facial images with symptoms are relatively scarce, we still train the model with this limited amount of data. The experimental results show that our proposed method achieves a mean average precision (mAP) of 57.73%, 60.38%, and 59.75% for different amounts of data. Compared with other methods, the mAP is higher by about 3%. Consequently, with the method proposed in this paper, facial symptoms can be identified effectively and accurately.
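To make the two-stage pipeline described in the summary concrete, the following is a minimal sketch, not the authors' implementation: a face region is first segmented with an off-the-shelf Mask R-CNN (torchvision's ResNet-50 FPN model stands in for the ResNet-101 backbone used in the paper, and is assumed here to be fine-tuned for faces), and the cropped face is then passed to a symptom detector. The `symptom_detector` argument is a hypothetical wrapper around a YOLOv4 model trained on acne, freckle, and wrinkle labels.

```python
# Sketch of the two-stage pipeline: Mask R-CNN face segmentation, then
# symptom detection (e.g. YOLOv4) on the cropped face region.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image


def segment_face(image, score_threshold=0.8):
    """Return the bounding box (x1, y1, x2, y2) of the highest-scoring region.

    Assumes a Mask R-CNN fine-tuned for faces; the stock COCO weights loaded
    here only detect generic classes such as 'person'.
    """
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    with torch.no_grad():
        outputs = model([to_tensor(image)])[0]
    keep = outputs["scores"] >= score_threshold
    if keep.sum() == 0:
        return None
    # Take the most confident detection as the face RoI.
    best = outputs["scores"][keep].argmax()
    x1, y1, x2, y2 = outputs["boxes"][keep][best].round().int().tolist()
    return x1, y1, x2, y2


def detect_symptoms(image_path, symptom_detector):
    """Crop the segmented face and run the symptom detector on it.

    `symptom_detector` is assumed to map a PIL image to a list of
    (label, confidence, box) tuples, e.g. a YOLOv4 model wrapper.
    """
    image = Image.open(image_path).convert("RGB")
    box = segment_face(image)
    if box is None:
        return []
    face_crop = image.crop(box)          # drop background and non-face noise
    return symptom_detector(face_crop)   # e.g. [("acne", 0.91, (x, y, w, h)), ...]
```

Cropping to the segmented face before detection reflects the key design choice in the summary: background clutter and other people's faces are removed first, so the YOLOv4 stage only sees the region of interest.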