
Learning ultrasound rendering from cross-sectional model slices for simulated training


Bibliographic Details
Main Authors: Zhang, Lin, Portenier, Tiziano, Goksel, Orcun
Format: Online Article Text
Language: English
Published: Springer International Publishing 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8134288/
https://www.ncbi.nlm.nih.gov/pubmed/33834348
http://dx.doi.org/10.1007/s11548-021-02349-6
_version_ 1783695163878539264
author Zhang, Lin
Portenier, Tiziano
Goksel, Orcun
author_facet Zhang, Lin
Portenier, Tiziano
Goksel, Orcun
author_sort Zhang, Lin
collection PubMed
description PURPOSE: Given the high level of expertise required for navigation and interpretation of ultrasound images, computational simulations can facilitate the training of such skills in virtual reality. With ray-tracing-based simulations, realistic ultrasound images can be generated. However, due to computational constraints for interactivity, image quality typically needs to be compromised. METHODS: We propose herein to bypass any rendering and simulation process at interactive time by conducting such simulations during a non-time-critical offline stage and then learning image translation from cross-sectional model slices to such simulated frames. We use a generative adversarial framework with a dedicated generator architecture and input feeding scheme, which both substantially improve image quality without an increase in network parameters. Integral attenuation maps derived from cross-sectional model slices, texture-friendly strided convolutions, and the provision of stochastic noise and input maps to intermediate layers in order to preserve locality are all shown herein to greatly facilitate such a translation task. RESULTS: Given several quality metrics, the proposed method with only tissue maps as input is shown to provide comparable or superior results to a state-of-the-art method that uses additional images of low-quality ultrasound renderings. An extensive ablation study shows the need for, and the benefits of, the individual contributions utilized in this work, based on qualitative examples and quantitative ultrasound similarity metrics. To that end, a local histogram statistics based error metric is proposed and demonstrated for visualization of local dissimilarities between ultrasound images. CONCLUSION: A deep-learning-based direct transformation from interactive tissue slices to the likeness of high-quality renderings allows any complex rendering process to be obviated in real time, which could enable extremely realistic ultrasound simulations on consumer hardware by moving the time-intensive processes to a one-time, offline preprocessing data-preparation stage that can be performed on dedicated high-end hardware.
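The integral attenuation maps mentioned in the description above are only named in this record, not specified. As an illustrative sketch (not the authors' implementation), such a map can be derived from a labeled cross-sectional slice by accumulating per-tissue attenuation along each scanline; the geometry (scanlines along image columns, transducer at the top row), the pixel spacing, the frequency, and the per-label coefficients below are all assumptions.

```python
# Minimal sketch only: an integral attenuation map from a labeled tissue slice,
# assuming a linear-array geometry with the transducer at the top row and
# scanlines running down the image columns. All coefficients are hypothetical.
import numpy as np

def integral_attenuation_map(tissue_slice, attenuation_db_cm_mhz, dz_cm=0.01, f_mhz=5.0):
    """Fraction of acoustic intensity remaining at each pixel of the slice.

    tissue_slice          : 2D integer array of tissue labels
    attenuation_db_cm_mhz : dict label -> attenuation coefficient [dB/(cm*MHz)]
    dz_cm, f_mhz          : assumed axial pixel spacing and imaging frequency
    """
    # Look up the attenuation coefficient of every pixel via a small table.
    lut = np.zeros(max(attenuation_db_cm_mhz) + 1)
    for label, coeff in attenuation_db_cm_mhz.items():
        lut[label] = coeff
    per_pixel_db = lut[tissue_slice] * f_mhz * dz_cm  # dB lost per axial step

    # "Integral" part: accumulate the loss from the transducer downwards, then
    # convert the accumulated dB to a linear intensity factor in (0, 1].
    total_db = np.cumsum(per_pixel_db, axis=0)
    return 10.0 ** (-total_db / 10.0)

# Hypothetical labels: 0 = fluid, 1 = soft tissue, 2 = bone.
labels = np.zeros((256, 128), dtype=int)
labels[100:, 40:90] = 1
att_map = integral_attenuation_map(labels, {0: 0.002, 1: 0.54, 2: 20.0})
```

Such a map could then be supplied alongside the tissue-label slice as an additional generator input channel, although the exact input feeding scheme is not detailed in this record.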
format Online
Article
Text
id pubmed-8134288
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Springer International Publishing
record_format MEDLINE/PubMed
spelling pubmed-8134288 2021-05-24 Learning ultrasound rendering from cross-sectional model slices for simulated training Zhang, Lin Portenier, Tiziano Goksel, Orcun Int J Comput Assist Radiol Surg Original Article PURPOSE: Given the high level of expertise required for navigation and interpretation of ultrasound images, computational simulations can facilitate the training of such skills in virtual reality. With ray-tracing-based simulations, realistic ultrasound images can be generated. However, due to computational constraints for interactivity, image quality typically needs to be compromised. METHODS: We propose herein to bypass any rendering and simulation process at interactive time by conducting such simulations during a non-time-critical offline stage and then learning image translation from cross-sectional model slices to such simulated frames. We use a generative adversarial framework with a dedicated generator architecture and input feeding scheme, which both substantially improve image quality without an increase in network parameters. Integral attenuation maps derived from cross-sectional model slices, texture-friendly strided convolutions, and the provision of stochastic noise and input maps to intermediate layers in order to preserve locality are all shown herein to greatly facilitate such a translation task. RESULTS: Given several quality metrics, the proposed method with only tissue maps as input is shown to provide comparable or superior results to a state-of-the-art method that uses additional images of low-quality ultrasound renderings. An extensive ablation study shows the need for, and the benefits of, the individual contributions utilized in this work, based on qualitative examples and quantitative ultrasound similarity metrics. To that end, a local histogram statistics based error metric is proposed and demonstrated for visualization of local dissimilarities between ultrasound images. CONCLUSION: A deep-learning-based direct transformation from interactive tissue slices to the likeness of high-quality renderings allows any complex rendering process to be obviated in real time, which could enable extremely realistic ultrasound simulations on consumer hardware by moving the time-intensive processes to a one-time, offline preprocessing data-preparation stage that can be performed on dedicated high-end hardware. Springer International Publishing 2021-04-08 2021 /pmc/articles/PMC8134288/ /pubmed/33834348 http://dx.doi.org/10.1007/s11548-021-02349-6 Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/).
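The local histogram statistics based error metric referenced in the abstract is likewise only named in this record. Below is a minimal sketch of one plausible variant, assuming non-overlapping patches and a total-variation distance between local intensity histograms; the patch size, bin count, and distance choice are assumptions, not the paper's definition.

```python
# Minimal sketch only: a coarse map of local histogram dissimilarity between two
# B-mode-like images, in the spirit of a local-histogram-statistics error metric.
# Patch size, bin count and the distance are assumptions, not the paper's choice.
import numpy as np

def local_histogram_dissimilarity(img_a, img_b, patch=32, bins=32):
    """Per-patch total-variation distance between local intensity histograms.

    img_a, img_b : 2D float arrays in [0, 1] with identical shape (assumed)
    Returns an array of shape (H // patch, W // patch) with values in [0, 1].
    """
    h, w = img_a.shape
    rows, cols = h // patch, w // patch
    edges = np.linspace(0.0, 1.0, bins + 1)
    dissim = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            win = (slice(r * patch, (r + 1) * patch),
                   slice(c * patch, (c + 1) * patch))
            ha = np.histogram(img_a[win], bins=edges)[0] / float(patch * patch)
            hb = np.histogram(img_b[win], bins=edges)[0] / float(patch * patch)
            # Total-variation distance between the two normalized histograms;
            # high values flag patches whose local speckle statistics differ.
            dissim[r, c] = 0.5 * np.abs(ha - hb).sum()
    return dissim
```

Upsampling the resulting patch grid back to the image resolution would give the kind of local-dissimilarity visualization the abstract describes.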
spellingShingle Original Article
Zhang, Lin
Portenier, Tiziano
Goksel, Orcun
Learning ultrasound rendering from cross-sectional model slices for simulated training
title Learning ultrasound rendering from cross-sectional model slices for simulated training
title_full Learning ultrasound rendering from cross-sectional model slices for simulated training
title_fullStr Learning ultrasound rendering from cross-sectional model slices for simulated training
title_full_unstemmed Learning ultrasound rendering from cross-sectional model slices for simulated training
title_short Learning ultrasound rendering from cross-sectional model slices for simulated training
title_sort learning ultrasound rendering from cross-sectional model slices for simulated training
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8134288/
https://www.ncbi.nlm.nih.gov/pubmed/33834348
http://dx.doi.org/10.1007/s11548-021-02349-6
work_keys_str_mv AT zhanglin learningultrasoundrenderingfromcrosssectionalmodelslicesforsimulatedtraining
AT porteniertiziano learningultrasoundrenderingfromcrosssectionalmodelslicesforsimulatedtraining
AT gokselorcun learningultrasoundrenderingfromcrosssectionalmodelslicesforsimulatedtraining