The classification of flash visual evoked potential based on deep learning
Main Authors: Liang, Na; Wang, Chengliang; Li, Shiying; Xie, Xin; Lin, Jun; Zhong, Wen
Format: Online Article Text
Language: English
Published: BioMed Central, 2023
Subjects: Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9851116/
https://www.ncbi.nlm.nih.gov/pubmed/36658545
http://dx.doi.org/10.1186/s12911-023-02107-5
_version_ | 1784872341305556992 |
author | Liang, Na Wang, Chengliang Li, Shiying Xie, Xin Lin, Jun Zhong, Wen |
author_facet | Liang, Na Wang, Chengliang Li, Shiying Xie, Xin Lin, Jun Zhong, Wen |
author_sort | Liang, Na |
collection | PubMed |
description | BACKGROUND: Visual electrophysiology is an objective visual function examination widely used in clinical work and medical identification that can objectively evaluate visual function and locate lesions according to waveform changes. However, in visual electrophysiological examinations, the flash visual evoked potential (FVEP) varies greatly among individuals, resulting in different waveforms across normal subjects. Moreover, most FVEP wave labelling is performed automatically by a machine and then manually corrected by professional clinical technicians. These labels may be biased by individual variation among subjects, incomplete clinical examination data, differences in professional skill, personal habits and other factors. Through a retrospective study of big data, an artificial intelligence algorithm is used to maintain high generalization ability in complex situations and to improve the accuracy of prescreening. METHODS: A novel multi-input neural network based on convolution and confidence branching (MCAC-Net) for retinitis pigmentosa (RP) recognition and out-of-distribution detection is proposed. The MCAC-Net uses global and local feature extraction, designed for the FVEP signal, which carries both local and global information, and a confidence branch is added for out-of-distribution sample detection. For the proposed manual features, a new input layer is added (see the illustrative architecture sketch after this record). RESULTS: The model is verified on a clinically collected FVEP dataset, achieving an accuracy of 90.7% in the classification task and 93.3% in the out-of-distribution detection task. CONCLUSION: We built a deep learning-based FVEP classification algorithm that promises to be an excellent tool for screening for RP by using FVEP signals. |
format | Online Article Text |
id | pubmed-9851116 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-9851116 2023-01-20 The classification of flash visual evoked potential based on deep learning Liang, Na Wang, Chengliang Li, Shiying Xie, Xin Lin, Jun Zhong, Wen BMC Med Inform Decis Mak Research BACKGROUND: Visual electrophysiology is an objective visual function examination widely used in clinical work and medical identification that can objectively evaluate visual function and locate lesions according to waveform changes. However, in visual electrophysiological examinations, the flash visual evoked potential (FVEP) varies greatly among individuals, resulting in different waveforms in different normal subjects. Moreover, most of the FVEP wave labelling is performed automatically by a machine, and manually corrected by professional clinical technicians. These labels may have biases due to the individual variations in subjects, incomplete clinical examination data, different professional skills, personal habits and other factors. Through the retrospective study of big data, an artificial intelligence algorithm is used to maintain high generalization abilities in complex situations and improve the accuracy of prescreening. METHODS: A novel multi-input neural network based on convolution and confidence branching (MCAC-Net) for retinitis pigmentosa (RP) recognition and out-of-distribution detection is proposed. The MCAC-Net with global and local feature extraction is designed for the FVEP signal that has different local and global information, and a confidence branch is added for out-of-distribution sample detection. For the proposed manual features, a new input layer is added. RESULTS: The model is verified by a clinically collected FVEP dataset, and an accuracy of 90.7% is achieved in the classification task and 93.3% in the out-of-distribution detection task. CONCLUSION: We built a deep learning-based FVEP classification algorithm that promises to be an excellent tool for screening RP diseases by using FVEP signals. BioMed Central 2023-01-19 /pmc/articles/PMC9851116/ /pubmed/36658545 http://dx.doi.org/10.1186/s12911-023-02107-5 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Research Liang, Na Wang, Chengliang Li, Shiying Xie, Xin Lin, Jun Zhong, Wen The classification of flash visual evoked potential based on deep learning |
title | The classification of flash visual evoked potential based on deep learning |
title_full | The classification of flash visual evoked potential based on deep learning |
title_fullStr | The classification of flash visual evoked potential based on deep learning |
title_full_unstemmed | The classification of flash visual evoked potential based on deep learning |
title_short | The classification of flash visual evoked potential based on deep learning |
title_sort | classification of flash visual evoked potential based on deep learning |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9851116/ https://www.ncbi.nlm.nih.gov/pubmed/36658545 http://dx.doi.org/10.1186/s12911-023-02107-5 |
work_keys_str_mv | AT liangna theclassificationofflashvisualevokedpotentialbasedondeeplearning AT wangchengliang theclassificationofflashvisualevokedpotentialbasedondeeplearning AT lishiying theclassificationofflashvisualevokedpotentialbasedondeeplearning AT xiexin theclassificationofflashvisualevokedpotentialbasedondeeplearning AT linjun theclassificationofflashvisualevokedpotentialbasedondeeplearning AT zhongwen theclassificationofflashvisualevokedpotentialbasedondeeplearning AT liangna classificationofflashvisualevokedpotentialbasedondeeplearning AT wangchengliang classificationofflashvisualevokedpotentialbasedondeeplearning AT lishiying classificationofflashvisualevokedpotentialbasedondeeplearning AT xiexin classificationofflashvisualevokedpotentialbasedondeeplearning AT linjun classificationofflashvisualevokedpotentialbasedondeeplearning AT zhongwen classificationofflashvisualevokedpotentialbasedondeeplearning |
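
The METHODS text in the description field above outlines MCAC-Net only at a high level: convolutional branches for global and local FVEP features, an extra input layer for hand-crafted (manual) features, a classification head, and a confidence branch used for out-of-distribution detection. The snippet below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation: the branch structure, all layer sizes, the names `MCACNetSketch`, `fvep_len`, `n_manual`, and the confidence threshold `tau` are assumptions made for illustration.

```python
# Illustrative sketch only: the record does not include the MCAC-Net code, so the
# architecture details and the OOD rule below are assumptions inferred from the abstract.
import torch
import torch.nn as nn


class MCACNetSketch(nn.Module):
    """Multi-input CNN with a confidence branch, loosely following the abstract:
    one branch for global FVEP context, one for local waveform detail, plus an
    extra input layer for hand-crafted (manual) features."""

    def __init__(self, fvep_len: int = 512, n_manual: int = 8, n_classes: int = 2):
        super().__init__()
        # Global branch: wide kernels and strong pooling for slow waveform trends.
        self.global_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, padding=7), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=15, padding=7), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Local branch: small kernels for sharp, localized waveform details.
        self.local_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Extra input layer for the manual features (assumed to be a small vector
        # of clinician-defined measurements).
        self.manual_fc = nn.Sequential(nn.Linear(n_manual, 32), nn.ReLU())
        fused = 32 + 32 + 32
        self.classifier = nn.Linear(fused, n_classes)  # RP vs. normal logits
        self.confidence = nn.Linear(fused, 1)          # confidence branch

    def forward(self, fvep: torch.Tensor, manual: torch.Tensor):
        # fvep: (batch, 1, fvep_len); manual: (batch, n_manual)
        g = self.global_branch(fvep).flatten(1)
        l = self.local_branch(fvep).flatten(1)
        m = self.manual_fc(manual)
        h = torch.cat([g, l, m], dim=1)
        logits = self.classifier(h)
        conf = torch.sigmoid(self.confidence(h))  # confidence in (0, 1)
        return logits, conf


if __name__ == "__main__":
    net = MCACNetSketch()
    x = torch.randn(4, 1, 512)   # toy batch of FVEP traces
    feats = torch.randn(4, 8)    # toy manual-feature vectors
    logits, conf = net(x, feats)
    # Assumed OOD rule: flag low-confidence samples as out-of-distribution.
    tau = 0.5
    is_ood = conf.squeeze(1) < tau
    print(logits.shape, conf.shape, is_ood)
```

Confidence-branch methods are typically trained with an auxiliary loss that encourages calibrated confidence scores; the abstract does not specify the loss or the threshold used for the reported 93.3% out-of-distribution accuracy, so those details are omitted here.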