Intuitive Cognition-Based Method for Generating Speech Using Hand Gestures
Muteness at its various levels is a common disability. Most technological solutions to the problem create vocal speech through the transition from sign languages to vocal acoustic sounds. We present a new approach for creating speech: a technology that does not require prior knowledge of sign language. This technology is based on the most basic level of speech, the phonetic division into vowels and consonants. The speech itself is expressed through sensing of the hand movements, which are divided into three rotations: yaw, pitch, and roll. The proposed algorithm converts these rotations into vowels and consonants. For the hand movement sensing, we used a depth camera; standard speakers produce the sounds. The combination of the programmed depth camera and the speakers, together with the cognitive activity of the brain, is integrated into a unique speech interface. Using this interface, the user can develop speech through an intuitive cognitive process in accordance with the ongoing brain activity, similar to the natural use of the vocal cords. The performance of the presented speech interface prototype substantiates that the proposed device could be a solution for those suffering from speech disabilities.
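The record does not include the authors' implementation. As a rough illustration of the rotation-to-phoneme idea described in the abstract, the following minimal Python sketch maps a quantized (yaw, pitch, roll) hand pose onto a small vowel/consonant set. The phoneme inventories, angle ranges, and the roll-based vowel/consonant split are hypothetical assumptions for illustration only, not details taken from the paper.

```python
# Hypothetical sketch (not the authors' published code): one way to map the three
# hand rotations reported by a depth camera -- yaw, pitch, and roll -- onto a small
# vowel/consonant inventory, as the abstract describes. Phoneme tables, angle
# thresholds, and the quantize() rule are illustrative assumptions only.

from dataclasses import dataclass

# Assumed inventories; the paper's actual phonetic division is not specified here.
VOWELS = ["a", "e", "i", "o", "u"]
CONSONANTS = ["b", "d", "g", "k", "l", "m", "n", "p", "s", "t"]


@dataclass
class HandRotation:
    yaw: float    # degrees, left/right turn of the hand
    pitch: float  # degrees, up/down tilt
    roll: float   # degrees, rotation about the forearm axis


def quantize(angle_deg: float, n_bins: int, lo: float = -90.0, hi: float = 90.0) -> int:
    """Map an angle in [lo, hi] degrees to one of n_bins discrete bins."""
    clamped = max(lo, min(hi, angle_deg))
    frac = (clamped - lo) / (hi - lo)
    return min(n_bins - 1, int(frac * n_bins))


def rotation_to_phoneme(rot: HandRotation) -> str:
    """Pick a vowel or consonant from a single hand pose (illustrative rule)."""
    # Assumption: roll selects vowel vs. consonant, yaw/pitch select which one.
    if rot.roll >= 0.0:
        return VOWELS[quantize(rot.yaw, len(VOWELS))]
    return CONSONANTS[quantize(rot.pitch, len(CONSONANTS))]


if __name__ == "__main__":
    # Palm rolled upward, hand turned slightly right -> one of the vowels.
    print(rotation_to_phoneme(HandRotation(yaw=20.0, pitch=-10.0, roll=15.0)))
    # Palm rolled downward, hand tilted up -> one of the consonants.
    print(rotation_to_phoneme(HandRotation(yaw=0.0, pitch=45.0, roll=-30.0)))
```

In a complete system, the depth camera would stream hand poses continuously and the selected phonemes would be fed to a speech synthesizer over the speakers; the sketch above only covers the pose-to-phoneme step under the stated assumptions.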
Main Authors: | Holdengreber, Eldad; Yozevitch, Roi; Khavkin, Vitali |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2021 |
Subjects: | Communication |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8402147/ https://www.ncbi.nlm.nih.gov/pubmed/34450731 http://dx.doi.org/10.3390/s21165291 |
_version_ | 1783745718766272512 |
---|---|
author | Holdengreber, Eldad; Yozevitch, Roi; Khavkin, Vitali |
author_facet | Holdengreber, Eldad; Yozevitch, Roi; Khavkin, Vitali |
author_sort | Holdengreber, Eldad |
collection | PubMed |
description | Muteness at its various levels is a common disability. Most technological solutions to the problem create vocal speech through the transition from sign languages to vocal acoustic sounds. We present a new approach for creating speech: a technology that does not require prior knowledge of sign language. This technology is based on the most basic level of speech, the phonetic division into vowels and consonants. The speech itself is expressed through sensing of the hand movements, which are divided into three rotations: yaw, pitch, and roll. The proposed algorithm converts these rotations into vowels and consonants. For the hand movement sensing, we used a depth camera; standard speakers produce the sounds. The combination of the programmed depth camera and the speakers, together with the cognitive activity of the brain, is integrated into a unique speech interface. Using this interface, the user can develop speech through an intuitive cognitive process in accordance with the ongoing brain activity, similar to the natural use of the vocal cords. The performance of the presented speech interface prototype substantiates that the proposed device could be a solution for those suffering from speech disabilities. |
format | Online Article Text |
id | pubmed-8402147 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8402147 2021-08-29 Intuitive Cognition-Based Method for Generating Speech Using Hand Gestures Holdengreber, Eldad; Yozevitch, Roi; Khavkin, Vitali Sensors (Basel) Communication Muteness at its various levels is a common disability. Most technological solutions to the problem create vocal speech through the transition from sign languages to vocal acoustic sounds. We present a new approach for creating speech: a technology that does not require prior knowledge of sign language. This technology is based on the most basic level of speech, the phonetic division into vowels and consonants. The speech itself is expressed through sensing of the hand movements, which are divided into three rotations: yaw, pitch, and roll. The proposed algorithm converts these rotations into vowels and consonants. For the hand movement sensing, we used a depth camera; standard speakers produce the sounds. The combination of the programmed depth camera and the speakers, together with the cognitive activity of the brain, is integrated into a unique speech interface. Using this interface, the user can develop speech through an intuitive cognitive process in accordance with the ongoing brain activity, similar to the natural use of the vocal cords. The performance of the presented speech interface prototype substantiates that the proposed device could be a solution for those suffering from speech disabilities. MDPI 2021-08-05 /pmc/articles/PMC8402147/ /pubmed/34450731 http://dx.doi.org/10.3390/s21165291 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Communication; Holdengreber, Eldad; Yozevitch, Roi; Khavkin, Vitali; Intuitive Cognition-Based Method for Generating Speech Using Hand Gestures |
title | Intuitive Cognition-Based Method for Generating Speech Using Hand Gestures |
title_full | Intuitive Cognition-Based Method for Generating Speech Using Hand Gestures |
title_fullStr | Intuitive Cognition-Based Method for Generating Speech Using Hand Gestures |
title_full_unstemmed | Intuitive Cognition-Based Method for Generating Speech Using Hand Gestures |
title_short | Intuitive Cognition-Based Method for Generating Speech Using Hand Gestures |
title_sort | intuitive cognition-based method for generating speech using hand gestures |
topic | Communication |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8402147/ https://www.ncbi.nlm.nih.gov/pubmed/34450731 http://dx.doi.org/10.3390/s21165291 |
work_keys_str_mv | AT holdengrebereldad intuitivecognitionbasedmethodforgeneratingspeechusinghandgestures AT yozevitchroi intuitivecognitionbasedmethodforgeneratingspeechusinghandgestures AT khavkinvitali intuitivecognitionbasedmethodforgeneratingspeechusinghandgestures |