Articulation constrained learning with application to speech emotion recognition
Speech emotion recognition methods combining articulatory information with acoustic features have been previously shown to improve recognition performance. Collection of articulatory data on a large scale may not be feasible in many scenarios, thus restricting the scope and applicability of such met...
Main authors: Shah, Mohit; Tu, Ming; Berisha, Visar; Chakrabarti, Chaitali; Spanias, Andreas
Format: Online Article Text
Language: English
Published: Springer International Publishing, 2019
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6919554/
https://www.ncbi.nlm.nih.gov/pubmed/31853252
http://dx.doi.org/10.1186/s13636-019-0157-9
Similar Items
- Speaker Recognition Using Constrained Convolutional Neural Networks in Emotional Speech
  by: Simić, Nikola, et al.
  Published: (2022)
- TorchDIVA: An extensible computational model of speech production built on an open-source machine learning library
  by: Kinahan, Sean P., et al.
  Published: (2023)
- Speaker Adaptation on Articulation and Acoustics for Articulation-to-Speech Synthesis
  by: Cao, Beiming, et al.
  Published: (2022)
- Reliability and validity of a widely-available AI tool for assessment of stress based on speech
  by: Yawer, Batul A., et al.
  Published: (2023)
- A Contribution to the Mechanism of Articulate Speech
  by: Carruthers, S. W.
  Published: (1900)