A novel Bayesian adaptive method for mapping the visual field

Bibliographic Details
Main Authors: Xu, Pengjing, Lesmes, Luis Andres, Yu, Deyue, Lu, Zhong-Lin
Format: Online Article Text
Language: English
Published: The Association for Research in Vision and Ophthalmology 2019
Subjects: Methods
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6917184/
https://www.ncbi.nlm.nih.gov/pubmed/31845976
http://dx.doi.org/10.1167/19.14.16
_version_ 1783480364579160064
author Xu, Pengjing
Lesmes, Luis Andres
Yu, Deyue
Lu, Zhong-Lin
author_facet Xu, Pengjing
Lesmes, Luis Andres
Yu, Deyue
Lu, Zhong-Lin
author_sort Xu, Pengjing
collection PubMed
description Measuring visual functions such as light and contrast sensitivity, visual acuity, reading speed, and crowding across retinal locations provides visual-field maps (VFMs) that are extremely valuable for detecting and managing eye diseases. Although mapping light sensitivity is a standard glaucoma test, the measurement is often noisy (Keltner et al., 2000). Mapping other visual functions is even more challenging. To improve the precision of light-sensitivity mapping and enable other VFM assessments, we developed a novel hybrid Bayesian adaptive testing framework, the qVFM method. The method combines a global module for preliminary assessment of the VFM's shape and a local module for assessing individual visual-field locations. This study validates the qVFM method in measuring light sensitivity across the visual field. In both simulation and psychophysics studies, we sampled 100 visual-field locations (60° × 60°) and compared the performance of qVFM with the qYN procedure (Lesmes et al., 2015), which measures light sensitivity at each location independently. In the simulations, a simulated observer was tested monocularly for 1,000 runs with 1,200 trials/run to compare the accuracy and precision of the two methods. In the experiments, data were collected from 12 eyes (six left, six right) of six human subjects. Subjects were cued to report the presence or absence of a target stimulus, with the luminance and location of the target adaptively selected in each trial. Both the simulations and a psychophysical experiment showed that the qVFM method can provide accurate, precise, and efficient mapping of light sensitivity. This method can be extended to map other visual functions, with potential clinical signals for monitoring vision loss, evaluating therapeutic interventions, and developing effective rehabilitation for low vision.
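The procedure the abstract describes rests on a standard Bayesian adaptive testing loop: maintain a posterior distribution over the observer's threshold, select each trial's stimulus to maximize expected information gain (equivalently, to minimize expected posterior entropy), and update the posterior by Bayes' rule after each yes/no response. The following is a minimal, illustrative Python sketch of such a one-location yes/no procedure; the grids, the logistic psychometric function, and all parameter values are assumptions for illustration, not the published qYN or qVFM implementation (qVFM additionally couples per-location estimates through a global model of the map's shape).

```python
import numpy as np

# Minimal sketch of a one-location Bayesian adaptive yes/no procedure.
# All grids and parameter values are illustrative assumptions, not the
# published qVFM/qYN implementation.
thresholds = np.linspace(0.0, 40.0, 81)   # candidate thresholds (dB)
stimuli = np.linspace(0.0, 40.0, 41)      # candidate stimulus levels (dB)
prior = np.full(thresholds.size, 1.0 / thresholds.size)  # flat prior

def p_yes(stim, thresh, slope=0.5, guess=0.05, lapse=0.02):
    """P('seen'): logistic psychometric function of stimulus level,
    with guess and lapse rates (parameter values are assumptions)."""
    core = 1.0 / (1.0 + np.exp(-slope * (stim - thresh)))
    return guess + (1.0 - guess - lapse) * core

def posterior(prior, stim, seen):
    """Bayes update of the threshold posterior after one trial."""
    like = p_yes(stim, thresholds)
    like = like if seen else 1.0 - like
    post = prior * like
    return post / post.sum()

def expected_entropy(prior, stim):
    """One-step-ahead expected posterior entropy for a candidate stimulus."""
    h = 0.0
    for seen in (True, False):
        like = p_yes(stim, thresholds)
        like = like if seen else 1.0 - like
        p_resp = float(np.sum(prior * like))
        if p_resp > 0.0:
            post = prior * like / p_resp
            h -= p_resp * np.sum(post * np.log(post + 1e-12))
    return h

def next_stimulus(prior):
    """Pick the stimulus that minimizes expected posterior entropy,
    i.e., maximizes the expected information gain of the next trial."""
    return min(stimuli, key=lambda s: expected_entropy(prior, s))

# Simulate a short run against an observer with a true threshold of 25 dB.
rng = np.random.default_rng(0)
true_threshold = 25.0
for _ in range(60):
    s = next_stimulus(prior)
    seen = rng.random() < p_yes(s, true_threshold)
    prior = posterior(prior, s, seen)

estimate = float(np.sum(prior * thresholds))  # posterior-mean threshold
print(f"estimated threshold: {estimate:.1f} dB")
```

The entropy-minimization step is what makes the procedure adaptive: stimuli near the current threshold estimate are the most informative and get selected automatically, which is why such methods converge in far fewer trials than fixed staircases.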
format Online
Article
Text
id pubmed-6917184
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher The Association for Research in Vision and Ophthalmology
record_format MEDLINE/PubMed
spelling pubmed-6917184 2019-12-30 A novel Bayesian adaptive method for mapping the visual field Xu, Pengjing; Lesmes, Luis Andres; Yu, Deyue; Lu, Zhong-Lin J Vis Methods [abstract as in the description field above] The Association for Research in Vision and Ophthalmology 2019-12-17 /pmc/articles/PMC6917184/ /pubmed/31845976 http://dx.doi.org/10.1167/19.14.16 Text en Copyright 2019 The Authors http://creativecommons.org/licenses/by-nc-nd/4.0/ This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
spellingShingle Methods
Xu, Pengjing
Lesmes, Luis Andres
Yu, Deyue
Lu, Zhong-Lin
A novel Bayesian adaptive method for mapping the visual field
title A novel Bayesian adaptive method for mapping the visual field
title_full A novel Bayesian adaptive method for mapping the visual field
title_fullStr A novel Bayesian adaptive method for mapping the visual field
title_full_unstemmed A novel Bayesian adaptive method for mapping the visual field
title_short A novel Bayesian adaptive method for mapping the visual field
title_sort novel bayesian adaptive method for mapping the visual field
topic Methods
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6917184/
https://www.ncbi.nlm.nih.gov/pubmed/31845976
http://dx.doi.org/10.1167/19.14.16
work_keys_str_mv AT xupengjing anovelbayesianadaptivemethodformappingthevisualfield
AT lesmesluisandres anovelbayesianadaptivemethodformappingthevisualfield
AT yudeyue anovelbayesianadaptivemethodformappingthevisualfield
AT luzhonglin anovelbayesianadaptivemethodformappingthevisualfield
AT xupengjing novelbayesianadaptivemethodformappingthevisualfield
AT lesmesluisandres novelbayesianadaptivemethodformappingthevisualfield
AT yudeyue novelbayesianadaptivemethodformappingthevisualfield
AT luzhonglin novelbayesianadaptivemethodformappingthevisualfield