Application of Deep Learning System into the Development of Communication Device for Quadriplegic Patient

Bibliographic Details

Main Authors: Lee, Jung Hwan, Kang, Taewoo, Choi, Byung Kwan, Han, In Ho, Kim, Byung Chul, Ro, Jung Hoon
Format: Online Article Text
Language: English
Published: Korean Neurotraumatology Society, 2019
Subjects: Clinical Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6826084/
https://www.ncbi.nlm.nih.gov/pubmed/31720261
http://dx.doi.org/10.13004/kjnt.2019.15.e17
_version_ 1783465009203904512
author Lee, Jung Hwan
Kang, Taewoo
Choi, Byung Kwan
Han, In Ho
Kim, Byung Chul
Ro, Jung Hoon
author_facet Lee, Jung Hwan
Kang, Taewoo
Choi, Byung Kwan
Han, In Ho
Kim, Byung Chul
Ro, Jung Hoon
author_sort Lee, Jung Hwan
collection PubMed
description OBJECTIVE: In general, quadriplegic patients use their voices to call the caregiver. However, severe quadriplegic patients with a tracheostomy cannot generate a voice and require other communication tools to call caregivers. Recently, monitoring of eye status using artificial intelligence (AI) has been widely applied in various fields. We built an eye status monitoring system using deep learning and developed a communication system through which quadriplegic patients can call the caregiver. METHODS: The communication system consists of 3 programs. The first program was developed to automatically capture eye images from the face using a webcam; it continuously captured and stored 15 eye images per second. Second, the captured eye images were classified as open or closed by deep learning, a type of AI. Google TensorFlow was used as the machine learning library for the convolutional neural network, and a total of 18,000 images were used to train the deep learning system. Finally, a third program was developed to emit a sound when the left eye was closed for 3 seconds. RESULTS: The test accuracy for eye status was 98.7%. In practice, when the quadriplegic patient looked at the webcam and closed his left eye for 3 seconds, the sound for calling a caregiver was generated. CONCLUSION: Our eye status detection software using AI is highly accurate, and the calling system for the quadriplegic patient was satisfactory.
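A rough Python sketch of the three-program pipeline described in the abstract is given below; it is an illustration, not the authors' code. Only the 15 eye images per second, the TensorFlow convolutional network, the 18,000-image training set, and the 3-second left-eye-closed trigger come from the abstract. The use of OpenCV for webcam capture and eye detection, the 24x24 crop size, the specific network layers, and the placeholder alert call are all assumptions.

import time

import cv2                      # assumed: OpenCV for webcam capture and eye detection
import tensorflow as tf

IMG_SIZE = 24                   # assumed eye-crop size; not stated in the abstract
FPS = 15                        # from the abstract: 15 eye images captured per second
CLOSED_SECONDS = 3              # from the abstract: alert after 3 s of left-eye closure


def build_model():
    """Small CNN scoring an eye crop as open (1) or closed (0); layers are assumed."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])


def main():
    model = build_model()
    # model.load_weights("eye_state.weights.h5")  # hypothetical file of weights trained on ~18,000 images
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    cap = cv2.VideoCapture(0)        # webcam facing the patient
    closed_since = None              # time at which the eye was first seen closed

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        eyes = eye_cascade.detectMultiScale(gray, 1.1, 5)
        if len(eyes) > 0:
            # Assumption: the left-most detection corresponds to the patient's left eye.
            x, y, w, h = sorted(eyes, key=lambda e: e[0])[0]
            crop = cv2.resize(gray[y:y + h, x:x + w], (IMG_SIZE, IMG_SIZE))
            crop = crop.astype("float32")[None, :, :, None] / 255.0
            is_open = model.predict(crop, verbose=0)[0, 0] > 0.5
            if is_open:
                closed_since = None
            elif closed_since is None:
                closed_since = time.time()
            elif time.time() - closed_since >= CLOSED_SECONDS:
                print("\aCalling caregiver")   # placeholder for the calling sound
                closed_since = None
        time.sleep(1.0 / FPS)                  # roughly 15 captures per second

    cap.release()


if __name__ == "__main__":
    main()

In the real system, the network weights trained on the 18,000 labelled eye images (reported test accuracy 98.7%) would be loaded in place of the untrained model built here.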
format Online
Article
Text
id pubmed-6826084
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Korean Neurotraumatology Society
record_format MEDLINE/PubMed
spelling pubmed-6826084 2019-11-12 Application of Deep Learning System into the Development of Communication Device for Quadriplegic Patient Lee, Jung Hwan Kang, Taewoo Choi, Byung Kwan Han, In Ho Kim, Byung Chul Ro, Jung Hoon Korean J Neurotrauma Clinical Article OBJECTIVE: In general, quadriplegic patients use their voices to call the caregiver. However, severe quadriplegic patients with a tracheostomy cannot generate a voice and require other communication tools to call caregivers. Recently, monitoring of eye status using artificial intelligence (AI) has been widely applied in various fields. We built an eye status monitoring system using deep learning and developed a communication system through which quadriplegic patients can call the caregiver. METHODS: The communication system consists of 3 programs. The first program was developed to automatically capture eye images from the face using a webcam; it continuously captured and stored 15 eye images per second. Second, the captured eye images were classified as open or closed by deep learning, a type of AI. Google TensorFlow was used as the machine learning library for the convolutional neural network, and a total of 18,000 images were used to train the deep learning system. Finally, a third program was developed to emit a sound when the left eye was closed for 3 seconds. RESULTS: The test accuracy for eye status was 98.7%. In practice, when the quadriplegic patient looked at the webcam and closed his left eye for 3 seconds, the sound for calling a caregiver was generated. CONCLUSION: Our eye status detection software using AI is highly accurate, and the calling system for the quadriplegic patient was satisfactory. Korean Neurotraumatology Society 2019-08-14 /pmc/articles/PMC6826084/ /pubmed/31720261 http://dx.doi.org/10.13004/kjnt.2019.15.e17 Text en Copyright © 2019 Korean Neurotraumatology Society https://creativecommons.org/licenses/by-nc/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Clinical Article
Lee, Jung Hwan
Kang, Taewoo
Choi, Byung Kwan
Han, In Ho
Kim, Byung Chul
Ro, Jung Hoon
Application of Deep Learning System into the Development of Communication Device for Quadriplegic Patient
title Application of Deep Learning System into the Development of Communication Device for Quadriplegic Patient
title_full Application of Deep Learning System into the Development of Communication Device for Quadriplegic Patient
title_fullStr Application of Deep Learning System into the Development of Communication Device for Quadriplegic Patient
title_full_unstemmed Application of Deep Learning System into the Development of Communication Device for Quadriplegic Patient
title_short Application of Deep Learning System into the Development of Communication Device for Quadriplegic Patient
title_sort application of deep learning system into the development of communication device for quadriplegic patient
topic Clinical Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6826084/
https://www.ncbi.nlm.nih.gov/pubmed/31720261
http://dx.doi.org/10.13004/kjnt.2019.15.e17
work_keys_str_mv AT leejunghwan applicationofdeeplearningsystemintothedevelopmentofcommunicationdeviceforquadriplegicpatient
AT kangtaewoo applicationofdeeplearningsystemintothedevelopmentofcommunicationdeviceforquadriplegicpatient
AT choibyungkwan applicationofdeeplearningsystemintothedevelopmentofcommunicationdeviceforquadriplegicpatient
AT haninho applicationofdeeplearningsystemintothedevelopmentofcommunicationdeviceforquadriplegicpatient
AT kimbyungchul applicationofdeeplearningsystemintothedevelopmentofcommunicationdeviceforquadriplegicpatient
AT rojunghoon applicationofdeeplearningsystemintothedevelopmentofcommunicationdeviceforquadriplegicpatient