
A learning robot for cognitive camera control in minimally invasive surgery



Bibliographic Details
Main Authors: Wagner, Martin, Bihlmaier, Andreas, Kenngott, Hannes Götz, Mietkowski, Patrick, Scheikl, Paul Maria, Bodenstedt, Sebastian, Schiepe-Tiska, Anja, Vetter, Josephin, Nickel, Felix, Speidel, S., Wörn, H., Mathis-Ullrich, F., Müller-Stich, B. P.
Format: Online Article Text
Language: English
Published: Springer US 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8346448/
https://www.ncbi.nlm.nih.gov/pubmed/33904989
http://dx.doi.org/10.1007/s00464-021-08509-8
Description
Summary:

BACKGROUND: We demonstrate the first self-learning, context-sensitive, autonomous camera-guiding robot applicable to minimally invasive surgery. Most surgical robots today are telemanipulators without autonomous capabilities. Autonomous systems have been developed for laparoscopic camera guidance, but they follow simple rules and do not adapt their behavior to specific tasks, procedures, or surgeons.

METHODS: The methodology presented here allows different robot kinematics to perceive their environment, interpret it according to a knowledge base, and perform context-aware actions. For training, twenty operations were conducted with human camera guidance by a single surgeon. Subsequently, we experimentally evaluated the cognitive robotic camera control. First, a VIKY EP system and a KUKA LWR 4 robot were trained on data from manual camera guidance recorded after completion of the surgeon's learning curve. Second, only data from the VIKY EP were used to train the LWR, and finally data from training with the LWR were used to re-train the LWR.

RESULTS: The duration of each operation decreased with the robot's increasing experience, from 1704 s ± 244 s to 1406 s ± 112 s, and finally 1197 s. Camera guidance quality (good/neutral/poor) improved from 38.6/53.4/7.9% to 49.4/46.3/4.1% and 56.2/41.0/2.8%.

CONCLUSIONS: The cognitive camera robot improved its performance with experience, laying the foundation for a new generation of cognitive surgical robots that adapt to a surgeon's needs.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00464-021-08509-8.