Interactive Echocardiography Translation Using Few-Shot GAN Transfer Learning
Main authors:
Format: Online Article Text
Language: English
Published: Hindawi, 2020
Subjects:
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7106869/
https://www.ncbi.nlm.nih.gov/pubmed/32256680
http://dx.doi.org/10.1155/2020/1487035
Summary:

BACKGROUND: Interactive echocardiography translation is an efficient educational tool for mastering cardiac anatomy. It strengthens the student's understanding through pixel-level translation between echocardiography images and theoretical sketch images. Previous research studies split the task into two stages, image segmentation and image synthesis. This split makes it hard to achieve pixel-level corresponding translation. It is also challenging to apply deep-learning-based methods in each stage when only a handful of annotations are available.

METHODS: To address interactive translation with limited annotations, we present a two-step transfer learning approach. First, we train two independent parent networks: the ultrasound-to-sketch (U2S) parent network and the sketch-to-ultrasound (S2U) parent network. U2S translation is similar to a segmentation task with sector-boundary inference, so the U2S parent network is a U-Net trained on the public VOC2012 segmentation dataset. S2U aims at recovering ultrasound texture, so the S2U parent network is a decoder network that generates ultrasound data from random input. After pretraining the parent networks, an encoder network is attached to the S2U parent network to translate ultrasound images into sketch images. We then jointly fine-tune U2S and S2U within the CGAN framework.

RESULTS AND CONCLUSION: Quantitative and qualitative comparisons of 1-shot, 5-shot, and 10-shot transfer learning show the effectiveness of the proposed algorithm. Interactive translation is achieved with few-shot transfer learning, which accelerates the development of new applications from scratch. Our few-shot transfer learning approach has great potential in the biomedical computer-aided image translation field, where annotated data are extremely precious.
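The core idea of the abstract — pretrain a "parent" model on abundant related data, then fine-tune it on only a few annotated target samples — can be sketched in miniature. The toy below is an illustrative assumption of mine, not the paper's actual networks: it uses a one-dimensional linear model in place of the U-Net/CGAN components, and contrasts 5-shot fine-tuning from a pretrained parent with training from scratch on the same five samples.

```python
import random

def fit(samples, w=0.0, b=0.0, lr=0.01, epochs=200):
    """Fit y = w*x + b by stochastic gradient descent on squared error."""
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def mse(params, data):
    w, b = params
    return sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)

random.seed(0)

# Abundant "source" data from a related task: y = 2x + 1 plus noise
# (standing in for the large pretraining dataset, e.g. VOC2012).
source = [(i / 10, 2 * (i / 10) + 1 + random.gauss(0, 0.05)) for i in range(100)]
w_parent, b_parent = fit(source)  # "parent network" pretraining

# Few-shot "target" task: y = 2x + 1.5 (a shifted variant), 5 annotations only.
target = [(x, 2 * x + 1.5) for x in (0.1, 0.3, 0.5, 0.7, 0.9)]

# 5-shot fine-tuning from the parent vs. the same budget from scratch.
tuned = fit(target, w=w_parent, b=b_parent, epochs=50)
scratch = fit(target, epochs=50)

print("5-shot fine-tuned MSE:", mse(tuned, target))
print("from-scratch MSE:    ", mse(scratch, target))
```

With the same tiny training budget, the model that starts from pretrained weights sits much closer to the target solution, mirroring the paper's observation that few-shot transfer outperforms training new applications from scratch.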