
Auditory Display of Fluorescence Image Data in an In Vivo Tumor Model

Objectives: This research aims to apply an auditory display for tumor imaging using fluorescence data, discuss its feasibility for in vivo tumor evaluation, and assess its potential to enhance cancer perception. Methods: Xenografted mice underwent fluorescence imaging after an injection of Cy5.5-glucose. Spectral information from the raw data was parametrized to emphasize the near-infrared fluorescence information, and the resulting parameters were mapped to control a sound synthesis engine that provided the auditory display. Drag–click maneuvers in in-house data navigation software generated sound from regions of interest (ROIs) in vivo. Results: Four different representations of the auditory display were acquired per ROI: (1) audio spectrum, (2) waveform, (3) numerical signal-to-noise ratio (SNR), and (4) the sound itself. SNRs were compared for statistical analysis. Compared with the no-tumor area, the tumor area produced sounds with a heterogeneous spectrum and waveform, and also featured a higher SNR (3.63 ± 8.41 vs. 0.42 ± 0.085, p < 0.05). Sound from the tumor was perceived by the unaided ear as high-timbred and unpleasant. Conclusions: By accentuating the specific tumor spectrum, auditory display of fluorescence imaging data can generate sound that helps the listener detect and discriminate small tumorous conditions in living animals. Despite some practical limitations, it can aid in the translation of fluorescent images by facilitating information transfer to the clinician in in vivo tumor imaging.
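The abstract describes a parameter-mapping sonification: fluorescence intensities from a region of interest drive a sound synthesis engine, and an audio SNR is computed from the resulting signals. The sketch below illustrates that general idea in Python; it is not the authors' engine, and the intensity-to-pitch/amplitude mapping, the simple sine synthesis, and the power-ratio SNR are all assumptions made for illustration.

```python
# Minimal sketch of a parameter-mapping sonification for a fluorescence ROI.
# Illustrative only: the mapping ranges, sine synthesis, and SNR definition
# are assumptions, not the implementation used in the study.
import numpy as np

SAMPLE_RATE = 44100  # audio sample rate in Hz


def sonify_roi(roi: np.ndarray, duration: float = 1.0) -> np.ndarray:
    """Map the mean near-infrared fluorescence intensity of an ROI to a sine tone.

    Brighter (more fluorescent) regions are mapped to higher pitch and larger
    amplitude, so a tumor-like ROI sounds higher and louder than background.
    """
    intensity = float(np.clip(roi.mean(), 0.0, 1.0))   # assume ROI normalized to [0, 1]
    freq = 220.0 + intensity * (1760.0 - 220.0)        # map intensity to 220-1760 Hz
    amp = 0.2 + 0.8 * intensity                        # louder for stronger signal
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return amp * np.sin(2.0 * np.pi * freq * t)


def audio_snr(signal_audio: np.ndarray, background_audio: np.ndarray) -> float:
    """Audio-domain SNR expressed as a simple mean-power ratio."""
    return float(np.mean(signal_audio ** 2) / np.mean(background_audio ** 2))


# Example: compare a bright (tumor-like) ROI against a dim background ROI.
tumor_audio = sonify_roi(np.full((32, 32), 0.8))
background_audio = sonify_roi(np.full((32, 32), 0.1))
print(f"audio SNR (tumor vs. background): {audio_snr(tumor_audio, background_audio):.2f}")
```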


Bibliographic Details
Main Authors: Lee, Sheen-Woo; Lee, Sang Hoon; Cheng, Zhen; Yeo, Woon Seung
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9315571/
https://www.ncbi.nlm.nih.gov/pubmed/35885632
http://dx.doi.org/10.3390/diagnostics12071728
Collection: PubMed
ID: pubmed-9315571
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Diagnostics (Basel)
Published Online: 2022-07-16
License: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).