
Gaze-contingent perceptually enabled interactions in the operating theatre

PURPOSE: Improved surgical outcome and patient safety in the operating theatre are constant challenges. We hypothesise that a framework that collects and utilises information from multiple sources, especially perceptually enabled information, could help meet these goals. This paper presents some core functionalities of a wider low-cost framework under development that allows perceptually enabled interaction within the surgical environment. METHODS: The synergy of wearable eye-tracking and advanced computer vision methodologies, such as SLAM, is exploited. As a demonstration of one of the framework’s possible functionalities, an articulated collaborative robotic arm and a laser pointer are integrated, and the set-up is used to project the surgeon’s fixation point in 3D space. RESULTS: The implementation is evaluated over 60 fixations on predefined targets, with distances between the subject and the targets of 92–212 cm and between the robot and the targets of 42–193 cm. The median overall system error is currently 3.98 cm. Its real-time potential is also highlighted. CONCLUSIONS: The work presented here represents an introduction and preliminary experimental validation of core functionalities of a larger framework under development. The proposed framework is geared towards a safer and more efficient surgical theatre. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s11548-017-1580-y) contains supplementary material, which is available to authorized users.

Bibliographic Details
Main Authors: Kogkas, Alexandros A., Darzi, Ara, Mylonas, George P.
Format: Online Article Text
Language: English
Published: Springer International Publishing 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5509830/
https://www.ncbi.nlm.nih.gov/pubmed/28397111
http://dx.doi.org/10.1007/s11548-017-1580-y
_version_ 1783250084836671488
author Kogkas, Alexandros A.
Darzi, Ara
Mylonas, George P.
author_facet Kogkas, Alexandros A.
Darzi, Ara
Mylonas, George P.
author_sort Kogkas, Alexandros A.
collection PubMed
description PURPOSE: Improved surgical outcome and patient safety in the operating theatre are constant challenges. We hypothesise that a framework that collects and utilises information from multiple sources, especially perceptually enabled information, could help meet these goals. This paper presents some core functionalities of a wider low-cost framework under development that allows perceptually enabled interaction within the surgical environment. METHODS: The synergy of wearable eye-tracking and advanced computer vision methodologies, such as SLAM, is exploited. As a demonstration of one of the framework’s possible functionalities, an articulated collaborative robotic arm and a laser pointer are integrated, and the set-up is used to project the surgeon’s fixation point in 3D space. RESULTS: The implementation is evaluated over 60 fixations on predefined targets, with distances between the subject and the targets of 92–212 cm and between the robot and the targets of 42–193 cm. The median overall system error is currently 3.98 cm. Its real-time potential is also highlighted. CONCLUSIONS: The work presented here represents an introduction and preliminary experimental validation of core functionalities of a larger framework under development. The proposed framework is geared towards a safer and more efficient surgical theatre. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s11548-017-1580-y) contains supplementary material, which is available to authorized users.
format Online
Article
Text
id pubmed-5509830
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher Springer International Publishing
record_format MEDLINE/PubMed
spelling pubmed-5509830 2017-07-28 Gaze-contingent perceptually enabled interactions in the operating theatre Kogkas, Alexandros A. Darzi, Ara Mylonas, George P. Int J Comput Assist Radiol Surg Original Article PURPOSE: Improved surgical outcome and patient safety in the operating theatre are constant challenges. We hypothesise that a framework that collects and utilises information from multiple sources, especially perceptually enabled information, could help meet these goals. This paper presents some core functionalities of a wider low-cost framework under development that allows perceptually enabled interaction within the surgical environment. METHODS: The synergy of wearable eye-tracking and advanced computer vision methodologies, such as SLAM, is exploited. As a demonstration of one of the framework’s possible functionalities, an articulated collaborative robotic arm and a laser pointer are integrated, and the set-up is used to project the surgeon’s fixation point in 3D space. RESULTS: The implementation is evaluated over 60 fixations on predefined targets, with distances between the subject and the targets of 92–212 cm and between the robot and the targets of 42–193 cm. The median overall system error is currently 3.98 cm. Its real-time potential is also highlighted. CONCLUSIONS: The work presented here represents an introduction and preliminary experimental validation of core functionalities of a larger framework under development. The proposed framework is geared towards a safer and more efficient surgical theatre. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s11548-017-1580-y) contains supplementary material, which is available to authorized users.
Springer International Publishing 2017-04-10 2017 /pmc/articles/PMC5509830/ /pubmed/28397111 http://dx.doi.org/10.1007/s11548-017-1580-y Text en © The Author(s) 2017 Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
spellingShingle Original Article
Kogkas, Alexandros A.
Darzi, Ara
Mylonas, George P.
Gaze-contingent perceptually enabled interactions in the operating theatre
title Gaze-contingent perceptually enabled interactions in the operating theatre
title_full Gaze-contingent perceptually enabled interactions in the operating theatre
title_fullStr Gaze-contingent perceptually enabled interactions in the operating theatre
title_full_unstemmed Gaze-contingent perceptually enabled interactions in the operating theatre
title_short Gaze-contingent perceptually enabled interactions in the operating theatre
title_sort gaze-contingent perceptually enabled interactions in the operating theatre
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5509830/
https://www.ncbi.nlm.nih.gov/pubmed/28397111
http://dx.doi.org/10.1007/s11548-017-1580-y
work_keys_str_mv AT kogkasalexandrosa gazecontingentperceptuallyenabledinteractionsintheoperatingtheatre
AT darziara gazecontingentperceptuallyenabledinteractionsintheoperatingtheatre
AT mylonasgeorgep gazecontingentperceptuallyenabledinteractionsintheoperatingtheatre