
A Real-Time Speech Separation Method Based on Camera and Microphone Array Sensors Fusion Approach

Bibliographic Details
Main Authors: Liu, Ching-Feng, Ciou, Wei-Siang, Chen, Peng-Ting, Du, Yi-Chun
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7349085/
https://www.ncbi.nlm.nih.gov/pubmed/32580328
http://dx.doi.org/10.3390/s20123527
_version_ 1783556981467906048
author Liu, Ching-Feng
Ciou, Wei-Siang
Chen, Peng-Ting
Du, Yi-Chun
author_facet Liu, Ching-Feng
Ciou, Wei-Siang
Chen, Peng-Ting
Du, Yi-Chun
author_sort Liu, Ching-Feng
collection PubMed
description In the context of human assistance, identifying and enhancing non-stationary target speech in various noise environments, such as a cocktail party, is an important issue for real-time speech separation. Previous studies mostly used microphone signal processing to perform target speech separation and analysis, such as feature recognition through large amounts of training data and supervised machine learning. Such methods are suitable for stationary noise suppression, but are relatively limited for non-stationary noise and have difficulty meeting real-time processing requirements. In this study, we propose a real-time speech separation method based on the fusion of an optical camera and a microphone array. The method is divided into two stages. Stage 1 uses computer vision with the camera to detect and identify targets of interest and to estimate the source angles and distances. Stage 2 uses beamforming with the microphone array to enhance and separate the target speech. An asynchronous update function integrates the beamforming control and speech processing to reduce the effect of processing delay. Experimental results show noise reductions of 6.1 dB and 5.2 dB in various stationary and non-stationary noise environments, respectively. The response time of the speech processing was less than 10 ms, which meets the requirements of a real-time system. The proposed method has high potential for application in assistive listening systems or machine language processing such as intelligent personal assistants.
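As a rough, illustrative sketch only (not the authors' implementation), the Python snippet below shows how a camera-estimated source angle could steer a simple delay-and-sum beamformer toward the target speaker, in the spirit of the two-stage method described above. The array geometry, sampling rate, channel count, and the helper names steering_delays and delay_and_sum are all assumptions introduced for this example; the paper's actual vision pipeline and beamforming algorithm may differ.

```python
import numpy as np

# Illustrative constants; these are assumptions for the sketch, not values
# reported in the paper.
SOUND_SPEED = 343.0   # speed of sound in m/s
FS = 16_000           # sampling rate in Hz
MIC_SPACING = 0.04    # spacing of a uniform linear array, in meters
NUM_MICS = 4          # number of microphones in the array


def steering_delays(angle_deg: float) -> np.ndarray:
    """Far-field per-microphone delays (seconds) for a source at angle_deg.

    Zero degrees is broadside (directly in front of the array); the angle
    would come from the vision stage, e.g. a detected face position.
    """
    angle = np.deg2rad(angle_deg)
    mic_positions = np.arange(NUM_MICS) * MIC_SPACING
    return mic_positions * np.sin(angle) / SOUND_SPEED


def delay_and_sum(frames: np.ndarray, angle_deg: float) -> np.ndarray:
    """Align the channels of one multichannel frame and average them.

    `frames` has shape (NUM_MICS, num_samples). Alignment uses a linear
    phase shift in the frequency domain, which handles fractional-sample
    delays.
    """
    delays = steering_delays(angle_deg)
    num_samples = frames.shape[1]
    freqs = np.fft.rfftfreq(num_samples, d=1.0 / FS)
    out = np.zeros(num_samples)
    for ch, delay in enumerate(delays):
        spectrum = np.fft.rfft(frames[ch])
        spectrum *= np.exp(2j * np.pi * freqs * delay)  # advance by `delay`
        out += np.fft.irfft(spectrum, n=num_samples)
    return out / NUM_MICS


if __name__ == "__main__":
    # Usage: steer toward a speaker the camera located at +20 degrees.
    rng = np.random.default_rng(0)
    frame = rng.standard_normal((NUM_MICS, 512))  # stand-in for real audio
    enhanced = delay_and_sum(frame, angle_deg=20.0)
    print(enhanced.shape)  # (512,)
```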
format Online
Article
Text
id pubmed-7349085
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-73490852020-07-22 A Real-Time Speech Separation Method Based on Camera and Microphone Array Sensors Fusion Approach Liu, Ching-Feng Ciou, Wei-Siang Chen, Peng-Ting Du, Yi-Chun Sensors (Basel) Article In the context of human assistance, identifying and enhancing non-stationary target speech in various noise environments, such as a cocktail party, is an important issue for real-time speech separation. Previous studies mostly used microphone signal processing to perform target speech separation and analysis, such as feature recognition through large amounts of training data and supervised machine learning. Such methods are suitable for stationary noise suppression, but are relatively limited for non-stationary noise and have difficulty meeting real-time processing requirements. In this study, we propose a real-time speech separation method based on the fusion of an optical camera and a microphone array. The method is divided into two stages. Stage 1 uses computer vision with the camera to detect and identify targets of interest and to estimate the source angles and distances. Stage 2 uses beamforming with the microphone array to enhance and separate the target speech. An asynchronous update function integrates the beamforming control and speech processing to reduce the effect of processing delay. Experimental results show noise reductions of 6.1 dB and 5.2 dB in various stationary and non-stationary noise environments, respectively. The response time of the speech processing was less than 10 ms, which meets the requirements of a real-time system. The proposed method has high potential for application in assistive listening systems or machine language processing such as intelligent personal assistants. MDPI 2020-06-22 /pmc/articles/PMC7349085/ /pubmed/32580328 http://dx.doi.org/10.3390/s20123527 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Liu, Ching-Feng
Ciou, Wei-Siang
Chen, Peng-Ting
Du, Yi-Chun
A Real-Time Speech Separation Method Based on Camera and Microphone Array Sensors Fusion Approach
title A Real-Time Speech Separation Method Based on Camera and Microphone Array Sensors Fusion Approach
title_full A Real-Time Speech Separation Method Based on Camera and Microphone Array Sensors Fusion Approach
title_fullStr A Real-Time Speech Separation Method Based on Camera and Microphone Array Sensors Fusion Approach
title_full_unstemmed A Real-Time Speech Separation Method Based on Camera and Microphone Array Sensors Fusion Approach
title_short A Real-Time Speech Separation Method Based on Camera and Microphone Array Sensors Fusion Approach
title_sort real-time speech separation method based on camera and microphone array sensors fusion approach
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7349085/
https://www.ncbi.nlm.nih.gov/pubmed/32580328
http://dx.doi.org/10.3390/s20123527
work_keys_str_mv AT liuchingfeng arealtimespeechseparationmethodbasedoncameraandmicrophonearraysensorsfusionapproach
AT ciouweisiang arealtimespeechseparationmethodbasedoncameraandmicrophonearraysensorsfusionapproach
AT chenpengting arealtimespeechseparationmethodbasedoncameraandmicrophonearraysensorsfusionapproach
AT duyichun arealtimespeechseparationmethodbasedoncameraandmicrophonearraysensorsfusionapproach
AT liuchingfeng realtimespeechseparationmethodbasedoncameraandmicrophonearraysensorsfusionapproach
AT ciouweisiang realtimespeechseparationmethodbasedoncameraandmicrophonearraysensorsfusionapproach
AT chenpengting realtimespeechseparationmethodbasedoncameraandmicrophonearraysensorsfusionapproach
AT duyichun realtimespeechseparationmethodbasedoncameraandmicrophonearraysensorsfusionapproach