A Joint Multitask Learning Model for Cross-sectional and Longitudinal Predictions of Visual Field Using OCT
Main Authors: Asaoka, Ryo; Xu, Linchuan; Murata, Hiroshi; Kiwaki, Taichi; Matsuura, Masato; Fujino, Yuri; Tanito, Masaki; Mori, Kazuhiko; Ikeda, Yoko; Kanamoto, Takashi; Inoue, Kenji; Yamagami, Jukichi; Yamanishi, Kenji
Format: Online Article Text
Language: English
Published: Elsevier, 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9560642/ https://www.ncbi.nlm.nih.gov/pubmed/36246943 http://dx.doi.org/10.1016/j.xops.2021.100055
author | Asaoka, Ryo Xu, Linchuan Murata, Hiroshi Kiwaki, Taichi Matsuura, Masato Fujino, Yuri Tanito, Masaki Mori, Kazuhiko Ikeda, Yoko Kanamoto, Takashi Inoue, Kenji Yamagami, Jukichi Yamanishi, Kenji |
collection | PubMed |
description | PURPOSE: We constructed a multitask learning model (latent space linear regression and deep learning [LSLR-DL]) in which the 2 tasks of cross-sectional prediction (using OCT) of the visual field (VF; central 10°) and longitudinal progression prediction of the VF (30°) were performed jointly by sharing the deep learning (DL) component, such that information from both tasks was used in an auxiliary manner (The Association for Computing Machinery's Special Interest Group on Knowledge Discovery and Data Mining [SIGKDD] 2021). The purpose of the current study was to investigate the prediction accuracy using an independently prepared validation dataset.
DESIGN: Cohort study.
PARTICIPANTS: The cross-sectional training and testing data sets included the VF (Humphrey Field Analyzer [HFA] 10-2 test) and an OCT measurement (obtained within 6 months) from 591 eyes of 351 healthy people or patients with open-angle glaucoma (OAG) and from 155 eyes of 131 patients with OAG, respectively. The longitudinal training and testing data sets included 7984 VF results (HFA 24-2 test) from 998 eyes of 592 patients with OAG and 1184 VF results (HFA 24-2 test) from 148 eyes of 84 patients with OAG, respectively. Each eye had 8 VF test results (HFA 24-2 test). The OCT sequences within the observation period were used.
METHODS: Root mean square error (RMSE) was used to evaluate the accuracy of LSLR-DL for the cross-sectional prediction of the VF (HFA 10-2 test). For the longitudinal prediction, the final (eighth) VF test (HFA 24-2 test) was predicted using a shorter VF series and the relevant OCT images, and the RMSE was calculated. For comparison, RMSE values were also calculated by applying the DL component alone (cross-sectional prediction) and ordinary pointwise linear regression (longitudinal prediction).
MAIN OUTCOME MEASURES: RMSE in the cross-sectional and longitudinal predictions.
RESULTS: Using LSLR-DL, the mean RMSE in the cross-sectional prediction was 6.4 dB; in the longitudinal prediction, it ranged from 4.4 dB (using VF tests 1 and 2) to 3.7 dB (using VF tests 1–7). LSLR-DL significantly outperformed the comparison methods.
CONCLUSIONS: These results indicate that LSLR-DL is useful for both the cross-sectional prediction of the VF (HFA 10-2 test) and the longitudinal progression prediction of the VF (HFA 24-2 test). |
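The evaluation metric used throughout the record, RMSE between predicted and measured visual field sensitivities, can be sketched as follows. This is a minimal illustration, not the study's code: the 52-point grid size (an HFA 24-2 field after blind-spot removal) and all values are assumptions for demonstration.

```python
import numpy as np

def vf_rmse(predicted_db, measured_db):
    """Pointwise root mean square error (in dB) between predicted and
    measured visual field sensitivities across the test grid."""
    predicted_db = np.asarray(predicted_db, dtype=float)
    measured_db = np.asarray(measured_db, dtype=float)
    return float(np.sqrt(np.mean((predicted_db - measured_db) ** 2)))

# Illustrative example: a hypothetical 52-point visual field.
rng = np.random.default_rng(0)
measured = rng.uniform(0.0, 35.0, size=52)            # measured sensitivities (dB)
predicted = measured + rng.normal(0.0, 4.0, size=52)  # simulated prediction error
print(f"RMSE = {vf_rmse(predicted, measured):.1f} dB")
```

A perfect prediction yields an RMSE of 0 dB; the study's reported values (e.g., 3.7–6.4 dB) are averages of this per-eye quantity over the test set.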
format | Online Article Text |
id | pubmed-9560642 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Elsevier |
record_format | MEDLINE/PubMed |
spelling | pubmed-9560642 (2022-10-14). A Joint Multitask Learning Model for Cross-sectional and Longitudinal Predictions of Visual Field Using OCT. Asaoka, Ryo; Xu, Linchuan; Murata, Hiroshi; Kiwaki, Taichi; Matsuura, Masato; Fujino, Yuri; Tanito, Masaki; Mori, Kazuhiko; Ikeda, Yoko; Kanamoto, Takashi; Inoue, Kenji; Yamagami, Jukichi; Yamanishi, Kenji. Ophthalmol Sci, Original Article. Elsevier, 2021-09-07. /pmc/articles/PMC9560642/ /pubmed/36246943 http://dx.doi.org/10.1016/j.xops.2021.100055 Text en © 2021 by the American Academy of Ophthalmology. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/). |
title | A Joint Multitask Learning Model for Cross-sectional and Longitudinal Predictions of Visual Field Using OCT |
topic | Original Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9560642/ https://www.ncbi.nlm.nih.gov/pubmed/36246943 http://dx.doi.org/10.1016/j.xops.2021.100055 |