Self-supervised monocular depth estimation for high field of view colonoscopy cameras
Optical colonoscopy is the gold standard procedure to detect colorectal cancer, the fourth most common cancer in the United Kingdom. Up to 22%–28% of polyps can be missed during the procedure, which is associated with interval cancer. A vision-based autonomous soft endorobot for colonoscopy can drastically improve the accuracy of the procedure by inspecting the colon more systematically with reduced discomfort.
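The abstract describes the core training signal: depth is learned without ground-truth maps by synthesising one video frame from another and penalising the photometric mismatch. As a rough illustration only (not the authors' implementation), the sketch below shows the typical view-synthesis warp and SSIM + L1 photometric loss used in this family of self-supervised methods. It assumes a standard pinhole camera; the paper's high field-of-view setting would replace the pinhole projection with a fisheye/omnidirectional model. All names (`warp_source_to_target`, `photometric_loss`), the relative pose `T_src_from_tgt`, and the intrinsics `K`/`K_inv` are hypothetical.

```python
# Minimal sketch of self-supervised depth training via view synthesis (PyTorch).
# Assumptions: a pinhole camera with 3x3 intrinsics K (and its inverse K_inv),
# a predicted per-pixel depth map, and a predicted relative pose T_src_from_tgt
# (4x4 rigid transform). None of this is taken from the paper's code.
import torch
import torch.nn.functional as F


def warp_source_to_target(src_img, depth, T_src_from_tgt, K, K_inv):
    """Synthesise the target view by back-projecting target pixels with the
    predicted depth, moving them into the source camera, and sampling colours."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=depth.device),
                            torch.arange(w, device=depth.device), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float().view(1, 3, -1)
    cam_pts = depth.view(b, 1, -1) * (K_inv @ pix)           # back-project to 3D
    ones = torch.ones(b, 1, h * w, device=depth.device)
    src_pts = K @ (T_src_from_tgt @ torch.cat([cam_pts, ones], dim=1))[:, :3]
    uv = src_pts[:, :2] / src_pts[:, 2:3].clamp(min=1e-6)    # perspective divide
    grid = torch.stack([2 * uv[:, 0] / (w - 1) - 1,           # normalise to [-1, 1]
                        2 * uv[:, 1] / (h - 1) - 1], dim=-1).view(b, h, w, 2)
    return F.grid_sample(src_img, grid, padding_mode="border", align_corners=True)


def photometric_loss(pred, target, alpha=0.85):
    """Standard SSIM + L1 mix; returns a per-pixel loss map (B x 1 x H x W)."""
    l1 = (pred - target).abs().mean(1, keepdim=True)
    mu_p, mu_t = F.avg_pool2d(pred, 3, 1, 1), F.avg_pool2d(target, 3, 1, 1)
    sigma_p = F.avg_pool2d(pred ** 2, 3, 1, 1) - mu_p ** 2
    sigma_t = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_t ** 2
    sigma_pt = F.avg_pool2d(pred * target, 3, 1, 1) - mu_p * mu_t
    C1, C2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_p * mu_t + C1) * (2 * sigma_pt + C2)) / \
           ((mu_p ** 2 + mu_t ** 2 + C1) * (sigma_p + sigma_t + C2))
    ssim_loss = ((1 - ssim) / 2).clamp(0, 1).mean(1, keepdim=True)
    return alpha * ssim_loss + (1 - alpha) * l1
```

In practice, the per-pixel loss map returned here would be minimised over several neighbouring source frames and averaged, with the depth and pose networks trained jointly.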
| Main Authors: | Mathew, Alwyn; Magerand, Ludovic; Trucco, Emanuele; Manfredi, Luigi |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Frontiers Media S.A., 2023 |
| Subjects: | Robotics and AI |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10407791/ https://www.ncbi.nlm.nih.gov/pubmed/37559569 http://dx.doi.org/10.3389/frobt.2023.1212525 |
_version_ | 1785086040498765824 |
---|---|
author | Mathew, Alwyn Magerand, Ludovic Trucco, Emanuele Manfredi, Luigi |
author_facet | Mathew, Alwyn Magerand, Ludovic Trucco, Emanuele Manfredi, Luigi |
author_sort | Mathew, Alwyn |
collection | PubMed |
description | Optical colonoscopy is the gold standard procedure to detect colorectal cancer, the fourth most common cancer in the United Kingdom. Up to 22%–28% of polyps can be missed during the procedure, which is associated with interval cancer. A vision-based autonomous soft endorobot for colonoscopy can drastically improve the accuracy of the procedure by inspecting the colon more systematically with reduced discomfort. A three-dimensional understanding of the environment is essential for robot navigation and can also improve the adenoma detection rate. Monocular depth estimation with deep learning methods has progressed substantially, but collecting ground-truth depth maps remains a challenge because no 3D camera can be fitted to a standard colonoscope. This work addresses the issue with a self-supervised monocular depth estimation model that learns depth directly from video sequences through view synthesis. In addition, the model accommodates the wide field-of-view cameras typically used in colonoscopy and specific challenges such as deformable surfaces, specular lighting, non-Lambertian surfaces, and high occlusion. We performed a qualitative analysis on a synthetic data set, a quantitative examination of the model trained on colonoscopy data, and an evaluation on real colonoscopy videos in near real-time. |
format | Online Article Text |
id | pubmed-10407791 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10407791 2023-08-09 Self-supervised monocular depth estimation for high field of view colonoscopy cameras Mathew, Alwyn; Magerand, Ludovic; Trucco, Emanuele; Manfredi, Luigi Front Robot AI Robotics and AI Frontiers Media S.A. 2023-07-25 /pmc/articles/PMC10407791/ /pubmed/37559569 http://dx.doi.org/10.3389/frobt.2023.1212525 Text en Copyright © 2023 Mathew, Magerand, Trucco and Manfredi. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). |
spellingShingle | Robotics and AI Mathew, Alwyn Magerand, Ludovic Trucco, Emanuele Manfredi, Luigi Self-supervised monocular depth estimation for high field of view colonoscopy cameras |
title | Self-supervised monocular depth estimation for high field of view colonoscopy cameras |
title_full | Self-supervised monocular depth estimation for high field of view colonoscopy cameras |
title_fullStr | Self-supervised monocular depth estimation for high field of view colonoscopy cameras |
title_full_unstemmed | Self-supervised monocular depth estimation for high field of view colonoscopy cameras |
title_short | Self-supervised monocular depth estimation for high field of view colonoscopy cameras |
title_sort | self-supervised monocular depth estimation for high field of view colonoscopy cameras |
topic | Robotics and AI |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10407791/ https://www.ncbi.nlm.nih.gov/pubmed/37559569 http://dx.doi.org/10.3389/frobt.2023.1212525 |
work_keys_str_mv | AT mathewalwyn selfsupervisedmonoculardepthestimationforhighfieldofviewcolonoscopycameras AT magerandludovic selfsupervisedmonoculardepthestimationforhighfieldofviewcolonoscopycameras AT truccoemanuele selfsupervisedmonoculardepthestimationforhighfieldofviewcolonoscopycameras AT manfrediluigi selfsupervisedmonoculardepthestimationforhighfieldofviewcolonoscopycameras |