
Content-Aware Eye Tracking for Autostereoscopic 3D Display

This study develops an eye tracking method for autostereoscopic three-dimensional (3D) display systems for use in various environments. The eye tracking-based autostereoscopic 3D display provides a low-crosstalk, high-resolution 3D image experience seamlessly without 3D eyeglasses by overcoming the viewing position restriction. However, accurate and fast eye position detection and tracking are still challenging, owing to varying light conditions, camera control, thick eyeglasses, sunlight reflection on eyeglasses, and limited system resources. This study presents a robust, automated algorithm and relevant systems for accurate and fast detection and tracking of eye pupil centers in 3D with a single visual camera and near-infrared (NIR) light emitting diodes (LEDs). The proposed eye tracker consists of eye–nose detection, eye–nose shape keypoint alignment, a tracker checker, and tracking with NIR LED on/off control. Eye–nose detection generates facial subregion boxes, including the eyes and nose, using an Error-Based Learning (EBL) method to select the best learnt database (DB). After detection, the eye–nose shape alignment is processed by the Supervised Descent Method (SDM) with the Scale-Invariant Feature Transform (SIFT). The aligner is content-aware in the sense that corresponding designated aligners are applied based on image content classification, such as light conditions and whether eyeglasses are worn. Experiments conducted on real image DBs yield promising eye detection and tracking outcomes, even under challenging conditions.
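The abstract describes a four-stage loop: eye–nose detection, content-aware keypoint alignment, a tracker checker, and NIR LED on/off control. A minimal sketch of how such a loop could be wired together is shown below; all class names, thresholds, and coordinates are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the tracking loop described in the abstract.
# Names, thresholds, and coordinates are illustrative, not from the paper.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class EyePositions:
    left: Tuple[float, float]   # pupil center (x, y) in image coordinates
    right: Tuple[float, float]


class ContentAwareTracker:
    """Tracks pupil centers frame to frame, re-running full eye-nose
    detection whenever the tracker checker rejects the current estimate."""

    def __init__(self) -> None:
        self.state: Optional[EyePositions] = None
        self.nir_led_on = False

    def classify_content(self, brightness: float, has_glasses: bool) -> str:
        # Content-aware branch: choose a designated aligner per condition
        # (low light vs. eyeglasses vs. normal), as the abstract outlines.
        if brightness < 0.2:
            return "low_light"
        return "glasses" if has_glasses else "normal"

    def process_frame(self, brightness: float, has_glasses: bool) -> EyePositions:
        # Switch the NIR LEDs on in dark scenes so the camera still sees eyes.
        self.nir_led_on = brightness < 0.2
        aligner = self.classify_content(brightness, has_glasses)
        if self.state is None or not self.tracker_check():
            self.state = self.detect()        # full eye-nose detection (EBL in the paper)
        self.state = self.align(aligner)      # shape keypoint alignment (SDM + SIFT)
        return self.state

    def tracker_check(self) -> bool:
        # Placeholder confidence check; the paper's tracker checker would
        # validate the tracked shape against the current frame.
        return self.state is not None

    def detect(self) -> EyePositions:
        # Placeholder detection result with fixed example coordinates.
        return EyePositions(left=(100.0, 120.0), right=(160.0, 120.0))

    def align(self, aligner: str) -> EyePositions:
        # Placeholder alignment step; a real SDM aligner would refine
        # the keypoints using image features around the current shape.
        assert self.state is not None
        return self.state
```

The point of the sketch is the control flow: alignment runs every frame, while the expensive full detection runs only when the checker loses confidence, and illumination is handled by toggling the NIR LEDs per frame.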


Bibliographic Details
Main Authors: Kang, Dongwoo; Heo, Jingu
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7506879/
https://www.ncbi.nlm.nih.gov/pubmed/32854229
http://dx.doi.org/10.3390/s20174787
Journal: Sensors (Basel)
Published online: 2020-08-25
Record ID: pubmed-7506879
Collection: PubMed (National Center for Biotechnology Information)
Record format: MEDLINE/PubMed
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).