VSUGAN unify voice style based on spectrogram and generated adversarial networks

In course recordings, audio captured with different pickups and in different environments is clearly distinguishable and introduces style differences after splicing, which degrades the quality of the recorded courses. A common way to mitigate this is voice style unification. In the present study, we propose a voice style unification model based on generative adversarial networks (VSUGAN) that transfers voice style via the spectrogram. VSUGAN synthesizes audio by combining the style information from an audio style template with the voice information from the processed audio, and it unifies audio style across different environments without retraining the network for new speakers. VSUGAN is implemented and evaluated on the THCHS-30 and VCTK-Corpus corpora; its source code is available at https://github.com/oy-tj/VSUGAN. The results demonstrate that VSUGAN effectively improves the quality of recorded audio and reduces style differences across a variety of environments.
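The abstract describes a spectrogram-domain pipeline: both the style template and the audio to be unified are first converted to spectrograms before the network combines them. The following is a minimal sketch of that front end in plain NumPy; the magnitude_spectrogram helper and the generator interface in the trailing comments are illustrative assumptions, not the implementation from the authors' repository (https://github.com/oy-tj/VSUGAN).

import numpy as np

def magnitude_spectrogram(audio, n_fft=1024, hop=256):
    """Magnitude STFT of a mono waveform: (num_frames, n_fft // 2 + 1)."""
    window = np.hanning(n_fft)
    frames = [
        np.abs(np.fft.rfft(window * audio[i:i + n_fft]))
        for i in range(0, len(audio) - n_fft, hop)
    ]
    return np.stack(frames)

# Hypothetical usage mirroring the pipeline the abstract outlines: take the
# style from a template recording, the voice content from the audio being
# unified, and let a trained generator produce the style-unified spectrogram
# (which would then be inverted back to a waveform, e.g. with Griffin-Lim).
# style_spec   = magnitude_spectrogram(template_audio)
# content_spec = magnitude_spectrogram(processed_audio)
# unified_spec = generator(content_spec, style_spec)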

Bibliographic Details
Main Authors: Ouyang, Tongjie, Yang, Zhijun, Xie, Huilong, Hu, Tianlin, Liu, Qingmei
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8692613/
https://www.ncbi.nlm.nih.gov/pubmed/34934100
http://dx.doi.org/10.1038/s41598-021-03770-2
_version_ 1784618975514066944
author Ouyang, Tongjie
Yang, Zhijun
Xie, Huilong
Hu, Tianlin
Liu, Qingmei
author_facet Ouyang, Tongjie
Yang, Zhijun
Xie, Huilong
Hu, Tianlin
Liu, Qingmei
author_sort Ouyang, Tongjie
collection PubMed
description In course recordings, audio captured with different pickups and in different environments is clearly distinguishable and introduces style differences after splicing, which degrades the quality of the recorded courses. A common way to mitigate this is voice style unification. In the present study, we propose a voice style unification model based on generative adversarial networks (VSUGAN) that transfers voice style via the spectrogram. VSUGAN synthesizes audio by combining the style information from an audio style template with the voice information from the processed audio, and it unifies audio style across different environments without retraining the network for new speakers. VSUGAN is implemented and evaluated on the THCHS-30 and VCTK-Corpus corpora; its source code is available at https://github.com/oy-tj/VSUGAN. The results demonstrate that VSUGAN effectively improves the quality of recorded audio and reduces style differences across a variety of environments.
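The description names an adversarial training objective behind the model. As a minimal, generic sketch of that objective in PyTorch, assuming illustrative generator and discriminator modules operating on spectrogram batches (this is not the architecture from the paper or its repository):

import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def gan_step(generator, discriminator, g_opt, d_opt, content_spec, style_spec):
    """One adversarial update on a batch of (content, style) spectrograms."""
    # The generator proposes a style-unified spectrogram from the voice
    # content plus the style template (interface assumed for illustration).
    fake = generator(content_spec, style_spec)

    # Discriminator update: real style templates should score 1,
    # generated spectrograms 0.
    d_opt.zero_grad()
    d_real = discriminator(style_spec)
    d_fake = discriminator(fake.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    d_opt.step()

    # Generator update: push the discriminator to score the output as real.
    g_opt.zero_grad()
    g_score = discriminator(fake)
    g_loss = bce(g_score, torch.ones_like(g_score))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()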
format Online
Article
Text
id pubmed-8692613
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-8692613 2021-12-28 VSUGAN unify voice style based on spectrogram and generated adversarial networks Ouyang, Tongjie Yang, Zhijun Xie, Huilong Hu, Tianlin Liu, Qingmei Sci Rep Article In course recordings, audio captured with different pickups and in different environments is clearly distinguishable and introduces style differences after splicing, which degrades the quality of the recorded courses. A common way to mitigate this is voice style unification. In the present study, we propose a voice style unification model based on generative adversarial networks (VSUGAN) that transfers voice style via the spectrogram. VSUGAN synthesizes audio by combining the style information from an audio style template with the voice information from the processed audio, and it unifies audio style across different environments without retraining the network for new speakers. VSUGAN is implemented and evaluated on the THCHS-30 and VCTK-Corpus corpora; its source code is available at https://github.com/oy-tj/VSUGAN. The results demonstrate that VSUGAN effectively improves the quality of recorded audio and reduces style differences across a variety of environments. Nature Publishing Group UK 2021-12-21 /pmc/articles/PMC8692613/ /pubmed/34934100 http://dx.doi.org/10.1038/s41598-021-03770-2 Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Ouyang, Tongjie
Yang, Zhijun
Xie, Huilong
Hu, Tianlin
Liu, Qingmei
VSUGAN unify voice style based on spectrogram and generated adversarial networks
title VSUGAN unify voice style based on spectrogram and generated adversarial networks
title_full VSUGAN unify voice style based on spectrogram and generated adversarial networks
title_fullStr VSUGAN unify voice style based on spectrogram and generated adversarial networks
title_full_unstemmed VSUGAN unify voice style based on spectrogram and generated adversarial networks
title_short VSUGAN unify voice style based on spectrogram and generated adversarial networks
title_sort vsugan unify voice style based on spectrogram and generated adversarial networks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8692613/
https://www.ncbi.nlm.nih.gov/pubmed/34934100
http://dx.doi.org/10.1038/s41598-021-03770-2
work_keys_str_mv AT ouyangtongjie vsuganunifyvoicestylebasedonspectrogramandgeneratedadversarialnetworks
AT yangzhijun vsuganunifyvoicestylebasedonspectrogramandgeneratedadversarialnetworks
AT xiehuilong vsuganunifyvoicestylebasedonspectrogramandgeneratedadversarialnetworks
AT hutianlin vsuganunifyvoicestylebasedonspectrogramandgeneratedadversarialnetworks
AT liuqingmei vsuganunifyvoicestylebasedonspectrogramandgeneratedadversarialnetworks