ADFAC: Automatic detection of facial articulatory features
Using computer-vision and image processing techniques, we aim to identify specific visual cues as induced by facial movements made during monosyllabic speech production. The method is named ADFAC: Automatic Detection of Facial Articulatory Cues. Four facial points of interest were detected automatic...
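The record describes a method that automatically detects facial points of interest from video of speech production. Purely as an illustration of that kind of pipeline, the sketch below shows one common way to extract per-frame facial landmarks using dlib's pretrained 68-point shape predictor; it is a hypothetical, generic example under those assumptions and is not the ADFAC implementation, which tracks four specific points of interest defined in the paper.

```python
# Illustrative only: generic per-frame facial landmark detection with dlib,
# not the ADFAC method from the paper. Assumes `pip install dlib opencv-python`
# and that dlib's pretrained 68-point model file is available locally.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model path

def landmarks_per_frame(video_path):
    """Yield (x, y) landmark coordinates for the first detected face in each frame, or None."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray)
        if faces:
            shape = predictor(gray, faces[0])
            yield [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
        else:
            yield None  # no face detected in this frame
    cap.release()
```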
Main authors: Garg, Saurabh; Hamarneh, Ghassan; Jongman, Allard; Sereno, Joan A.; Wang, Yue
Format: Online Article Text
Language: English
Published: Elsevier, 2020
Online access:
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7393529/
- https://www.ncbi.nlm.nih.gov/pubmed/32760662
- http://dx.doi.org/10.1016/j.mex.2020.101006
Similar items
- Plain-to-clear speech video conversion for enhanced intelligibility
  by: Sachdeva, Shubam, et al.
  Published: (2023)
- Generating post-hoc explanation from deep neural networks for multi-modal medical image analysis tasks
  by: Jin, Weina, et al.
  Published: (2023)
- Deep Learning-Based Detection of Articulatory Features in Arabic and English Speech
  by: Algabri, Mohammed, et al.
  Published: (2021)
- Speech recognition using articulatory and excitation source features
  by: Rao, K Sreenivasa, et al.
  Published: (2017)
- Speaker Sex Influences Processing of Grammatical Gender
  by: Vitevitch, Michael S., et al.
  Published: (2013)