
Sensor-to-Image Based Neural Networks: A Reliable Reconstruction Method for Diffuse Optical Imaging of High-Scattering Media


Bibliographic Details
Main Authors: Yuliansyah, Diannata Rahman, Pan, Min-Chun, Hsu, Ya-Fen
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9741421/
https://www.ncbi.nlm.nih.gov/pubmed/36501794
http://dx.doi.org/10.3390/s22239096
Description
Summary: Imaging tasks today are increasingly being shifted toward deep learning-based solutions, and biomedical imaging problems are no exception to this trend. It is appealing to consider deep learning as an alternative for such a complex imaging task. Although research on deep learning-based solutions continues to thrive, challenges still remain that limit the availability of these solutions in clinical practice. Diffuse optical tomography is a particularly challenging field since the problem is both ill-posed and ill-conditioned. To obtain a reconstructed image, various regularization-based models and procedures have been developed over the last three decades. In this study, a sensor-to-image based neural network for diffuse optical imaging has been developed as an alternative to the existing Tikhonov regularization (TR) method. It also provides a different structure compared to previous neural network approaches. We focus on realizing a complete image reconstruction function approximation (from sensor to image) by combining multiple deep learning architectures known in imaging fields, which gives more capability to learn than fully connected neural network (FCNN) and/or convolutional neural network (CNN) architectures. We use the idea of transformation from the sensor domain to the image domain, similar to AUTOMAP, and the concept of an encoder, which learns a compressed representation of the inputs. Further, a U-net with skip connections is proposed and implemented to extract features and obtain the contrast image. We designed a branching-like structure of the network that fully supports the ring-scanning measurement system, meaning it can deal with various types of experimental data. The output images are obtained by multiplying the contrast images with the background coefficients. Our network is capable of producing attainable performance in both simulation and experiment cases, and is shown to be reliable in reconstructing non-synthesized data.
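The pipeline described in the abstract (sensor-to-image domain transform, feature extraction with a skip connection, then scaling the contrast image by the background coefficient) can be illustrated with a minimal, untrained forward-pass sketch. Everything below is an assumption for illustration only: the measurement count (256, e.g. 16 sources × 16 detectors), the 32×32 reconstruction grid, the single convolution filter standing in for the U-net, and the background absorption value are all hypothetical, not the authors' actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MEAS = 256   # assumed number of boundary measurements (e.g. 16 sources x 16 detectors)
GRID = 32      # assumed reconstruction grid size

def conv2d_same(x, k):
    """Naive 'same' 2-D convolution with zero padding (stand-in for U-net layers)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

# Untrained illustrative weights (in the paper these would be learned end to end)
W_enc = rng.normal(0.0, 0.01, (GRID * GRID, N_MEAS))  # sensor-domain -> image-domain map
K = rng.normal(0.0, 0.1, (3, 3))                      # one conv filter as a toy "U-net"

def reconstruct(measurements, mu_a_background=0.006):
    """Sketch of the sensor-to-image forward pass:
    1. fully connected domain transform (AUTOMAP-style) with tanh activation,
    2. convolutional feature extraction with an additive skip connection and ReLU,
       producing the contrast image,
    3. multiplication by the background coefficient to get the output image."""
    latent = np.tanh(W_enc @ measurements).reshape(GRID, GRID)
    contrast = np.maximum(latent + conv2d_same(latent, K), 0.0)
    return contrast * mu_a_background

y = rng.normal(size=N_MEAS)   # synthetic stand-in for ring-scanning sensor readings
img = reconstruct(y)
print(img.shape)  # (32, 32)
```

The scaling step in `reconstruct` mirrors the abstract's statement that output images are obtained by multiplying contrast images with background coefficients; the real network would replace the random weights with parameters trained on simulated and experimental data.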
Its performance compares favorably with the results of the TR method and FCNN models. The proposed and implemented model is able to localize inclusions under various conditions. The strategy developed in this paper is a promising alternative for clinical breast tumor imaging applications.