
Interobserver and Intertest Agreement in Telemedicine Glaucoma Screening with Optic Disk Photos and Optical Coherence Tomography



Bibliographic Details
Main Authors: Anton, Alfonso; Nolivos, Karen; Pazos, Marta; Fatti, Gianluca; Herranz, Alejandra; Ayala-Fuentes, Miriam Eleonora; Martínez-Prats, Elena; Peral, Oscar; Vega-Lopez, Zaida; Monleon-Getino, Antoni; Morilla-Grasa, Antonio; Comas, Merce; Castells, Xavier
Format: Online Article Text
Language: English
Published: MDPI 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8347319/
https://www.ncbi.nlm.nih.gov/pubmed/34362120
http://dx.doi.org/10.3390/jcm10153337
Description
Summary:
Purpose: To evaluate interobserver and intertest agreement between optical coherence tomography (OCT) and retinography in the detection of glaucoma through a telemedicine program.
Methods: A stratified sample of 4113 individuals was randomly selected, and those who accepted underwent an examination including visual acuity, intraocular pressure (IOP), non-mydriatic retinography, and imaging with a portable OCT device. Participants’ data and images were uploaded and assessed by 16 ophthalmologists on a deferred basis. Two independent evaluations were performed for all participants. Agreement between methods was assessed using the kappa coefficient and the prevalence-adjusted bias-adjusted kappa (PABAK). We analyzed factors potentially influencing the level of agreement.
Results: The final sample comprised 1006 participants. Of all suspected glaucoma cases (n = 201), 20.4% were identified in retinographs only, 11.9% in OCT images only, 46.3% in both, and 21.4% were diagnosed based on other data. Overall interobserver agreement was moderate to good, with a kappa coefficient of 0.37 and a PABAK index of 0.58. Higher values were obtained by experienced evaluators (kappa = 0.61; PABAK = 0.82). Kappa and PABAK values between OCT and photographs were 0.52 and 0.82 for the first evaluation.
Conclusion: In a telemedicine screening setting, interobserver agreement on diagnosis was moderate but improved with greater evaluator expertise.
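For readers unfamiliar with the agreement statistics cited in the abstract, the sketch below shows how Cohen's kappa and PABAK are computed from a binary (glaucoma suspect: yes/no) 2x2 agreement table. The counts used are hypothetical and are not taken from the study; they are chosen only to illustrate how a low-prevalence condition can produce a modest kappa alongside a much higher PABAK, the pattern reported in the abstract.

```python
# Minimal sketch: Cohen's kappa and PABAK for a 2x2 agreement table.
# Cell counts are hypothetical, NOT data from the study.

def kappa_and_pabak(a, b, c, d):
    """a = both raters positive, b = rater1 +/rater2 -,
    c = rater1 -/rater2 +, d = both raters negative."""
    n = a + b + c + d
    p_o = (a + d) / n                       # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)   # chance agreement on "positive"
    p_no = ((c + d) / n) * ((b + d) / n)    # chance agreement on "negative"
    p_e = p_yes + p_no                      # expected (chance) agreement
    kappa = (p_o - p_e) / (1 - p_e)         # Cohen's kappa
    pabak = 2 * p_o - 1                     # PABAK for two categories
    return kappa, pabak

# Hypothetical counts with low prevalence of positives
k, p = kappa_and_pabak(a=50, b=45, c=55, d=850)
print(f"kappa = {k:.2f}, PABAK = {p:.2f}")  # kappa ~0.44, PABAK ~0.80
```

Because PABAK depends only on the observed agreement, it is not pulled down by the low prevalence of suspected glaucoma, which is why the reported PABAK values exceed the corresponding kappa coefficients.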