
Correlated lip motion and voice audio data

This data set comprises correlated audio and lip movement data in multiple videos of multiple subjects reading the same text. It was collected to facilitate the development and validation of algorithms used to train and test a compound biometric system that consists of lip-motion and voice recognition. The data set is a collection of videos of volunteers reciting a fixed script, intended to be used to train software to recognize voice and lip-motion patterns. A second video of each individual reciting a shorter phrase is included, designed to be used to test the recognition functionality of the system. The recordings were collected in a controlled, indoor setting with a 4K professional-grade camcorder and adjustable LED lights.
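As a rough illustration of how such paired recordings might be consumed, the Python sketch below separates the voice track and crops lip-region frames from one training video. This is only a sketch under stated assumptions: the file names (subject01_training.mp4), the use of ffmpeg and OpenCV, and the crude mouth-region heuristic are not part of the data set's documentation.

    import subprocess
    from pathlib import Path

    import cv2  # pip install opencv-python

    # Hypothetical file layout; the data set's actual naming scheme may differ.
    TRAIN_VIDEO = Path("subject01_training.mp4")   # full fixed script
    AUDIO_OUT = Path("subject01_training.wav")

    # 1. Pull the voice track out of the video with ffmpeg (16 kHz mono PCM).
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(TRAIN_VIDEO), "-vn",
         "-acodec", "pcm_s16le", "-ar", "16000", "-ac", "1", str(AUDIO_OUT)],
        check=True,
    )

    # 2. Walk the frames and crop a rough mouth region from each detected face.
    face_detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(str(TRAIN_VIDEO))
    fps = cap.get(cv2.CAP_PROP_FPS)  # lets lip frames be time-aligned with the audio
    lip_frames = []

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces[:1]:  # assume one speaker per video
            # Lower third of the face box as a crude lip region.
            lip_frames.append(frame[y + 2 * h // 3 : y + h, x : x + w])

    cap.release()
    print(f"{len(lip_frames)} lip crops at {fps:.1f} fps, audio in {AUDIO_OUT}")

Because the frame rate is recorded alongside the extracted audio, the lip crops can later be aligned in time with the voice signal for joint training; the same steps would apply to the shorter test-phrase video.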


Bibliographic Details
Main Authors: Colasito, Marco, Straub, Jeremy, Kotala, Pratap
Format: Online Article Text
Language: English
Published: Elsevier 2018
Subjects: Computer Science
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6218630/
https://www.ncbi.nlm.nih.gov/pubmed/30417045
http://dx.doi.org/10.1016/j.dib.2018.10.043
_version_ 1783368496816586752
author Colasito, Marco
Straub, Jeremy
Kotala, Pratap
author_facet Colasito, Marco
Straub, Jeremy
Kotala, Pratap
author_sort Colasito, Marco
collection PubMed
description This data set comprises correlated audio and lip movement data in multiple videos of multiple subjects reading the same text. It was collected to facilitate the development and validation of algorithms used to train and test a compound biometric system that consists of lip-motion and voice recognition. The data set is a collection of videos of volunteers reciting a fixed script, intended to be used to train software to recognize voice and lip-motion patterns. A second video of each individual reciting a shorter phrase is included, designed to be used to test the recognition functionality of the system. The recordings were collected in a controlled, indoor setting with a 4K professional-grade camcorder and adjustable LED lights.
format Online
Article
Text
id pubmed-6218630
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-62186302018-11-09 Correlated lip motion and voice audio data Colasito, Marco Straub, Jeremy Kotala, Pratap Data Brief Computer Science This data set is comprised of correlated audio and lip movement data in multiple videos of multiple subjects reading the same text. It was collected to facilitate the development and validation of algorithms used to train and test a compound biometric system that consists of lip-motion and voice recognition. The data set is a collection of videos of volunteers reciting a fixed script that is intended to be used to train software to recognize voice and lip-motion patterns. A second video is included of the individual reciting a shorter phrase, which is designed to be used to test the recognition functionality of the system. The recordings were collected in a controlled, indoor setting with a 4K professional-grade camcorder and adjustable, LED lights. Elsevier 2018-10-18 /pmc/articles/PMC6218630/ /pubmed/30417045 http://dx.doi.org/10.1016/j.dib.2018.10.043 Text en © 2018 The Authors http://creativecommons.org/licenses/by/4.0/ This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Computer Science
Colasito, Marco
Straub, Jeremy
Kotala, Pratap
Correlated lip motion and voice audio data
title Correlated lip motion and voice audio data
title_full Correlated lip motion and voice audio data
title_fullStr Correlated lip motion and voice audio data
title_full_unstemmed Correlated lip motion and voice audio data
title_short Correlated lip motion and voice audio data
title_sort correlated lip motion and voice audio data
topic Computer Science
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6218630/
https://www.ncbi.nlm.nih.gov/pubmed/30417045
http://dx.doi.org/10.1016/j.dib.2018.10.043
work_keys_str_mv AT colasitomarco correlatedlipmotionandvoiceaudiodata
AT straubjeremy correlatedlipmotionandvoiceaudiodata
AT kotalapratap correlatedlipmotionandvoiceaudiodata