
Deep-Learning-Based Character Recognition from Handwriting Motion Data Captured Using IMU and Force Sensors

Digitizing handwriting is mostly performed using either image-based methods, such as optical character recognition, or utilizing two or more devices, such as a special stylus and a smart pad. The high-cost nature of this approach necessitates a cheaper and standalone smart pen. Therefore, in this paper, a deep-learning-based compact smart digital pen that recognizes 36 alphanumeric characters was developed.

Full description

Bibliographic Details
Main Authors: Alemayoh, Tsige Tadesse, Shintani, Masaaki, Lee, Jae Hoon, Okamoto, Shingo
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9612168/
https://www.ncbi.nlm.nih.gov/pubmed/36298192
http://dx.doi.org/10.3390/s22207840
_version_ 1784819712556793856
author Alemayoh, Tsige Tadesse
Shintani, Masaaki
Lee, Jae Hoon
Okamoto, Shingo
author_facet Alemayoh, Tsige Tadesse
Shintani, Masaaki
Lee, Jae Hoon
Okamoto, Shingo
author_sort Alemayoh, Tsige Tadesse
collection PubMed
description Digitizing handwriting is mostly performed using either image-based methods, such as optical character recognition, or utilizing two or more devices, such as a special stylus and a smart pad. The high-cost nature of this approach necessitates a cheaper and standalone smart pen. Therefore, in this paper, a deep-learning-based compact smart digital pen that recognizes 36 alphanumeric characters was developed. Unlike common methods, which employ only inertial data, handwriting recognition is achieved from hand motion data captured using an inertial force sensor. The developed prototype smart pen comprises an ordinary ballpoint ink chamber, three force sensors, a six-channel inertial sensor, a microcomputer, and a plastic barrel structure. Handwritten data of the characters were recorded from six volunteers. After the data was properly trimmed and restructured, it was used to train four neural networks using deep-learning methods. These included Vision transformer (ViT), DNN (deep neural network), CNN (convolutional neural network), and LSTM (long short-term memory). The ViT network outperformed the others to achieve a validation accuracy of 99.05%. The trained model was further validated in real-time where it showed promising performance. These results will be used as a foundation to extend this investigation to include more characters and subjects.
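The description above outlines the overall approach: nine channels of handwriting-motion data (three force sensors plus a six-channel inertial sensor) are segmented into per-character windows and classified into 36 alphanumeric classes by a deep network. The snippet below is a minimal, hypothetical PyTorch sketch of that idea only; the window length, layer sizes, and the choice of a small 1-D CNN (rather than the ViT that performed best in the paper) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): classify 36 alphanumeric characters
# from multichannel handwriting-motion windows. The channel count follows the paper's
# sensor setup (3 force + 6 inertial); window length and architecture are assumed.
import torch
import torch.nn as nn

NUM_CHANNELS = 9      # 3 force channels + 6-axis IMU (assumed ordering)
WINDOW_LEN = 128      # samples per trimmed character segment (assumption)
NUM_CLASSES = 36      # digits 0-9 and letters a-z

class MotionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (WINDOW_LEN // 4), NUM_CLASSES)

    def forward(self, x):              # x: (batch, channels, time)
        return self.classifier(self.features(x).flatten(1))

# Forward pass on a dummy batch standing in for segmented, normalized sensor windows.
model = MotionCNN()
dummy = torch.randn(4, NUM_CHANNELS, WINDOW_LEN)
print(model(dummy).shape)             # torch.Size([4, 36])
```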
format Online
Article
Text
id pubmed-9612168
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9612168 2022-10-28 Deep-Learning-Based Character Recognition from Handwriting Motion Data Captured Using IMU and Force Sensors Alemayoh, Tsige Tadesse Shintani, Masaaki Lee, Jae Hoon Okamoto, Shingo Sensors (Basel) Article Digitizing handwriting is mostly performed using either image-based methods, such as optical character recognition, or utilizing two or more devices, such as a special stylus and a smart pad. The high-cost nature of this approach necessitates a cheaper and standalone smart pen. Therefore, in this paper, a deep-learning-based compact smart digital pen that recognizes 36 alphanumeric characters was developed. Unlike common methods, which employ only inertial data, handwriting recognition is achieved from hand motion data captured using an inertial force sensor. The developed prototype smart pen comprises an ordinary ballpoint ink chamber, three force sensors, a six-channel inertial sensor, a microcomputer, and a plastic barrel structure. Handwritten data of the characters were recorded from six volunteers. After the data was properly trimmed and restructured, it was used to train four neural networks using deep-learning methods. These included Vision transformer (ViT), DNN (deep neural network), CNN (convolutional neural network), and LSTM (long short-term memory). The ViT network outperformed the others to achieve a validation accuracy of 99.05%. The trained model was further validated in real-time where it showed promising performance. These results will be used as a foundation to extend this investigation to include more characters and subjects. MDPI 2022-10-15 /pmc/articles/PMC9612168/ /pubmed/36298192 http://dx.doi.org/10.3390/s22207840 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Alemayoh, Tsige Tadesse
Shintani, Masaaki
Lee, Jae Hoon
Okamoto, Shingo
Deep-Learning-Based Character Recognition from Handwriting Motion Data Captured Using IMU and Force Sensors
title Deep-Learning-Based Character Recognition from Handwriting Motion Data Captured Using IMU and Force Sensors
title_full Deep-Learning-Based Character Recognition from Handwriting Motion Data Captured Using IMU and Force Sensors
title_fullStr Deep-Learning-Based Character Recognition from Handwriting Motion Data Captured Using IMU and Force Sensors
title_full_unstemmed Deep-Learning-Based Character Recognition from Handwriting Motion Data Captured Using IMU and Force Sensors
title_short Deep-Learning-Based Character Recognition from Handwriting Motion Data Captured Using IMU and Force Sensors
title_sort deep-learning-based character recognition from handwriting motion data captured using imu and force sensors
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9612168/
https://www.ncbi.nlm.nih.gov/pubmed/36298192
http://dx.doi.org/10.3390/s22207840
work_keys_str_mv AT alemayohtsigetadesse deeplearningbasedcharacterrecognitionfromhandwritingmotiondatacapturedusingimuandforcesensors
AT shintanimasaaki deeplearningbasedcharacterrecognitionfromhandwritingmotiondatacapturedusingimuandforcesensors
AT leejaehoon deeplearningbasedcharacterrecognitionfromhandwritingmotiondatacapturedusingimuandforcesensors
AT okamotoshingo deeplearningbasedcharacterrecognitionfromhandwritingmotiondatacapturedusingimuandforcesensors