
Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions: Computational Ethnography Algorithm Development and Validation


Bibliographic Details
Main Authors: Helou, Samar, Abou-Khalil, Victoria, Iacobucci, Riccardo, El Helou, Elie, Kiyono, Ken
Format: Online Article Text
Language: English
Published: JMIR Publications 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8145082/
https://www.ncbi.nlm.nih.gov/pubmed/33970117
http://dx.doi.org/10.2196/25218
_version_ 1783697097188442112
author Helou, Samar
Abou-Khalil, Victoria
Iacobucci, Riccardo
El Helou, Elie
Kiyono, Ken
author_facet Helou, Samar
Abou-Khalil, Victoria
Iacobucci, Riccardo
El Helou, Elie
Kiyono, Ken
author_sort Helou, Samar
collection PubMed
description BACKGROUND: The study of doctor-patient-computer interactions is a key research area for examining doctor-patient relationships; however, studying these interactions is costly and obtrusive as researchers usually set up complex mechanisms or intrude on consultations to collect, then manually analyze the data. OBJECTIVE: We aimed to facilitate human-computer and human-human interaction research in clinics by providing a computational ethnography tool: an unobtrusive automatic classifier of screen gaze and dialogue combinations in doctor-patient-computer interactions. METHODS: The classifier’s input is video taken by doctors using their computers' internal camera and microphone. By estimating the key points of the doctor's face and the presence of voice activity, we estimate the type of interaction that is taking place. The classification output of each video segment is 1 of 4 interaction classes: (1) screen gaze and dialogue, wherein the doctor is gazing at the computer screen while conversing with the patient; (2) dialogue, wherein the doctor is gazing away from the computer screen while conversing with the patient; (3) screen gaze, wherein the doctor is gazing at the computer screen without conversing with the patient; and (4) other, wherein no screen gaze or dialogue are detected. We evaluated the classifier using 30 minutes of video provided by 5 doctors simulating consultations in their clinics both in semi- and fully inclusive layouts. RESULTS: The classifier achieved an overall accuracy of 0.83, a performance similar to that of a human coder. Similar to the human coder, the classifier was more accurate in fully inclusive layouts than in semi-inclusive layouts. CONCLUSIONS: The proposed classifier can be used by researchers, care providers, designers, medical educators, and others who are interested in exploring and answering questions related to screen gaze and dialogue in doctor-patient-computer interactions.
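The four-way output described in the abstract reduces to combining two per-segment boolean signals: whether the doctor's estimated gaze is on the screen and whether voice activity is present. The sketch below illustrates only that final combination step; the function and class names are illustrative and do not come from the paper, and the upstream gaze and voice-activity detectors are assumed to exist elsewhere.

```python
from enum import Enum


class Interaction(Enum):
    """The four interaction classes named in the abstract."""
    SCREEN_GAZE_AND_DIALOGUE = 1  # gazing at screen while conversing
    DIALOGUE = 2                  # conversing, gaze away from screen
    SCREEN_GAZE = 3               # gazing at screen, no conversation
    OTHER = 4                     # neither screen gaze nor dialogue


def classify_segment(screen_gaze: bool, dialogue: bool) -> Interaction:
    """Map per-segment gaze and voice-activity flags to one class.

    `screen_gaze` would come from a face key-point/gaze estimator and
    `dialogue` from a voice-activity detector, as the abstract describes.
    """
    if screen_gaze and dialogue:
        return Interaction.SCREEN_GAZE_AND_DIALOGUE
    if dialogue:
        return Interaction.DIALOGUE
    if screen_gaze:
        return Interaction.SCREEN_GAZE
    return Interaction.OTHER
```

Running this over consecutive video segments yields the per-segment class sequence that the paper evaluates against a human coder.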
format Online
Article
Text
id pubmed-8145082
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-81450822021-06-11 Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions: Computational Ethnography Algorithm Development and Validation Helou, Samar Abou-Khalil, Victoria Iacobucci, Riccardo El Helou, Elie Kiyono, Ken J Med Internet Res Original Paper BACKGROUND: The study of doctor-patient-computer interactions is a key research area for examining doctor-patient relationships; however, studying these interactions is costly and obtrusive as researchers usually set up complex mechanisms or intrude on consultations to collect, then manually analyze the data. OBJECTIVE: We aimed to facilitate human-computer and human-human interaction research in clinics by providing a computational ethnography tool: an unobtrusive automatic classifier of screen gaze and dialogue combinations in doctor-patient-computer interactions. METHODS: The classifier’s input is video taken by doctors using their computers' internal camera and microphone. By estimating the key points of the doctor's face and the presence of voice activity, we estimate the type of interaction that is taking place. The classification output of each video segment is 1 of 4 interaction classes: (1) screen gaze and dialogue, wherein the doctor is gazing at the computer screen while conversing with the patient; (2) dialogue, wherein the doctor is gazing away from the computer screen while conversing with the patient; (3) screen gaze, wherein the doctor is gazing at the computer screen without conversing with the patient; and (4) other, wherein no screen gaze or dialogue are detected. We evaluated the classifier using 30 minutes of video provided by 5 doctors simulating consultations in their clinics both in semi- and fully inclusive layouts. RESULTS: The classifier achieved an overall accuracy of 0.83, a performance similar to that of a human coder. 
Similar to the human coder, the classifier was more accurate in fully inclusive layouts than in semi-inclusive layouts. CONCLUSIONS: The proposed classifier can be used by researchers, care providers, designers, medical educators, and others who are interested in exploring and answering questions related to screen gaze and dialogue in doctor-patient-computer interactions. JMIR Publications 2021-05-10 /pmc/articles/PMC8145082/ /pubmed/33970117 http://dx.doi.org/10.2196/25218 Text en ©Samar Helou, Victoria Abou-Khalil, Riccardo Iacobucci, Elie El Helou, Ken Kiyono. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 10.05.2021. https://creativecommons.org/licenses/by/4.0/This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
spellingShingle Original Paper
Helou, Samar
Abou-Khalil, Victoria
Iacobucci, Riccardo
El Helou, Elie
Kiyono, Ken
Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions: Computational Ethnography Algorithm Development and Validation
title Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions: Computational Ethnography Algorithm Development and Validation
title_full Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions: Computational Ethnography Algorithm Development and Validation
title_fullStr Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions: Computational Ethnography Algorithm Development and Validation
title_full_unstemmed Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions: Computational Ethnography Algorithm Development and Validation
title_short Automatic Classification of Screen Gaze and Dialogue in Doctor-Patient-Computer Interactions: Computational Ethnography Algorithm Development and Validation
title_sort automatic classification of screen gaze and dialogue in doctor-patient-computer interactions: computational ethnography algorithm development and validation
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8145082/
https://www.ncbi.nlm.nih.gov/pubmed/33970117
http://dx.doi.org/10.2196/25218
work_keys_str_mv AT helousamar automaticclassificationofscreengazeanddialogueindoctorpatientcomputerinteractionscomputationalethnographyalgorithmdevelopmentandvalidation
AT aboukhalilvictoria automaticclassificationofscreengazeanddialogueindoctorpatientcomputerinteractionscomputationalethnographyalgorithmdevelopmentandvalidation
AT iacobucciriccardo automaticclassificationofscreengazeanddialogueindoctorpatientcomputerinteractionscomputationalethnographyalgorithmdevelopmentandvalidation
AT elhelouelie automaticclassificationofscreengazeanddialogueindoctorpatientcomputerinteractionscomputationalethnographyalgorithmdevelopmentandvalidation
AT kiyonoken automaticclassificationofscreengazeanddialogueindoctorpatientcomputerinteractionscomputationalethnographyalgorithmdevelopmentandvalidation