
Local Style Preservation in Improved GAN-Driven Synthetic Image Generation for Endoscopic Tool Segmentation

Accurate semantic image segmentation from medical imaging can enable intelligent vision-based assistance in robot-assisted minimally invasive surgery. The human body and surgical procedures are highly dynamic. While machine-vision presents a promising approach, sufficiently large training image sets for robust performance are either costly or unavailable. This work examines three novel generative adversarial network (GAN) methods of providing usable synthetic tool images using only surgical background images and a few real tool images. The best of these three novel approaches generates realistic tool textures while preserving local background content by incorporating both a style preservation and a content loss component into the proposed multi-level loss function. The approach is quantitatively evaluated, and results suggest that the synthetically generated training tool images enhance UNet tool segmentation performance.

Bibliographic Details
Main Authors: Su, Yun-Hsuan, Jiang, Wenfan, Chitrakar, Digesh, Huang, Kevin, Peng, Haonan, Hannaford, Blake
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8346972/
https://www.ncbi.nlm.nih.gov/pubmed/34372398
http://dx.doi.org/10.3390/s21155163
author Su, Yun-Hsuan
Jiang, Wenfan
Chitrakar, Digesh
Huang, Kevin
Peng, Haonan
Hannaford, Blake
collection PubMed
description Accurate semantic image segmentation from medical imaging can enable intelligent vision-based assistance in robot-assisted minimally invasive surgery. The human body and surgical procedures are highly dynamic. While machine-vision presents a promising approach, sufficiently large training image sets for robust performance are either costly or unavailable. This work examines three novel generative adversarial network (GAN) methods of providing usable synthetic tool images using only surgical background images and a few real tool images. The best of these three novel approaches generates realistic tool textures while preserving local background content by incorporating both a style preservation and a content loss component into the proposed multi-level loss function. The approach is quantitatively evaluated, and results suggest that the synthetically generated training tool images enhance UNet tool segmentation performance. More specifically, with a random set of 100 cadaver and live endoscopic images from the University of Washington Sinus Dataset, the UNet trained with synthetically generated images using the presented method resulted in 35.7% and 30.6% improvement over using purely real images in mean Dice coefficient and Intersection over Union scores, respectively. This study is promising towards the use of more widely available and routine screening endoscopy to preoperatively generate synthetic training tool images for intraoperative UNet tool segmentation.
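The description reports segmentation quality as mean Dice coefficient and Intersection over Union (IoU) improvements. For reference, a minimal NumPy sketch of both metrics for binary tool masks — the function names and the toy masks are illustrative, not taken from the paper:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice score: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """Intersection over Union: |A∩B| / |A∪B| for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy 4x4 predicted vs. ground-truth tool masks.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(dice_coefficient(pred, target))  # 2*3/(4+3) ≈ 0.857
print(iou_score(pred, target))         # 3/4 = 0.75
```

Dice weights the overlap by the total mask mass, so it is more forgiving of small masks than IoU; reporting both, as the study does, gives a fuller picture of segmentation quality.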
format Online
Article
Text
id pubmed-8346972
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8346972 2021-08-08 Local Style Preservation in Improved GAN-Driven Synthetic Image Generation for Endoscopic Tool Segmentation Su, Yun-Hsuan; Jiang, Wenfan; Chitrakar, Digesh; Huang, Kevin; Peng, Haonan; Hannaford, Blake. Sensors (Basel), Article. MDPI 2021-07-30 /pmc/articles/PMC8346972/ /pubmed/34372398 http://dx.doi.org/10.3390/s21155163 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
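The record's abstract attributes the best-performing method to a multi-level loss combining a style-preservation term with a content term. The paper defines the exact formulation; the sketch below shows one conventional way such terms are built (a Gram-matrix style loss plus a feature-space content loss, in the spirit of classic neural style transfer). The function names, weights, and feature shapes are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: channel-wise correlations
    that summarize texture/style while discarding spatial layout."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(feat_gen, feat_ref):
    """Mean squared distance between Gram matrices (style term)."""
    return float(np.mean((gram_matrix(feat_gen) - gram_matrix(feat_ref)) ** 2))

def content_loss(feat_gen, feat_ref):
    """Mean squared distance between raw feature maps (content term)."""
    return float(np.mean((feat_gen - feat_ref) ** 2))

def combined_loss(gen_feats, style_feats, content_feats,
                  w_style=1.0, w_content=1.0):
    """Weighted sum of style and content terms over several feature
    levels -- a multi-level loss in spirit."""
    s = sum(style_loss(g, sf) for g, sf in zip(gen_feats, style_feats))
    c = sum(content_loss(g, cf) for g, cf in zip(gen_feats, content_feats))
    return w_style * s + w_content * c

# Usage with random stand-in features at three levels.
rng = np.random.default_rng(0)
gen = [rng.standard_normal((8, 16, 16)) for _ in range(3)]
print(combined_loss(gen, style_feats=gen, content_feats=gen))  # identical -> 0.0
```

Keeping a content term alongside the style term is what lets a generator impose realistic tool texture while preserving the local background content, which matches the preservation goal described in the abstract.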
title Local Style Preservation in Improved GAN-Driven Synthetic Image Generation for Endoscopic Tool Segmentation
topic Article