Deep user identification model with multiple biometric data
Main Authors: | Song, Hyoung-Kyu; AlAlkeem, Ebrahim; Yun, Jaewoong; Kim, Tae-Ho; Yoo, Hyerin; Heo, Dasom; Chae, Myungsu; Yeun, Chan Yeob |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central 2020 |
Subjects: | Research |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7367324/ https://www.ncbi.nlm.nih.gov/pubmed/32677882 http://dx.doi.org/10.1186/s12859-020-03613-3 |
_version_ | 1783560401962663936 |
---|---|
author | Song, Hyoung-Kyu AlAlkeem, Ebrahim Yun, Jaewoong Kim, Tae-Ho Yoo, Hyerin Heo, Dasom Chae, Myungsu Yeun, Chan Yeob |
author_facet | Song, Hyoung-Kyu AlAlkeem, Ebrahim Yun, Jaewoong Kim, Tae-Ho Yoo, Hyerin Heo, Dasom Chae, Myungsu Yeun, Chan Yeob |
author_sort | Song, Hyoung-Kyu |
collection | PubMed |
description | BACKGROUND: Recognition is an essential function of human beings. Humans easily recognize a person using various inputs such as voice, face, or gesture. In this study, we focus on a deep learning (DL) model with multiple modalities, which has many benefits, including noise reduction. We used ResNet-50 to extract features from 2D input data. RESULTS: This study proposes a novel multimodal and multitask model that can both identify the user and classify the gender in a single step. The features extracted from each modality are concatenated at the feature level and used as the input to the identification module. Additionally, the model design allows the number of modalities used in a single model to be changed. To demonstrate our model, we generate 58 virtual subjects from public ECG, face, and fingerprint datasets. In tests with noisy input, the multimodal model is more robust and performs better than any single modality. CONCLUSIONS: This paper presents an end-to-end approach to multimodal and multitask learning. The proposed model shows robustness against spoof attacks, which can be significant for bio-authentication devices. Through the results of this study, we suggest a new perspective on the human identification task that performs better than previous approaches. |
format | Online Article Text |
id | pubmed-7367324 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-7367324 2020-07-20 Deep user identification model with multiple biometric data Song, Hyoung-Kyu AlAlkeem, Ebrahim Yun, Jaewoong Kim, Tae-Ho Yoo, Hyerin Heo, Dasom Chae, Myungsu Yeun, Chan Yeob BMC Bioinformatics Research BACKGROUND: Recognition is an essential function of human beings. Humans easily recognize a person using various inputs such as voice, face, or gesture. In this study, we focus on a deep learning (DL) model with multiple modalities, which has many benefits, including noise reduction. We used ResNet-50 to extract features from 2D input data. RESULTS: This study proposes a novel multimodal and multitask model that can both identify the user and classify the gender in a single step. The features extracted from each modality are concatenated at the feature level and used as the input to the identification module. Additionally, the model design allows the number of modalities used in a single model to be changed. To demonstrate our model, we generate 58 virtual subjects from public ECG, face, and fingerprint datasets. In tests with noisy input, the multimodal model is more robust and performs better than any single modality. CONCLUSIONS: This paper presents an end-to-end approach to multimodal and multitask learning. The proposed model shows robustness against spoof attacks, which can be significant for bio-authentication devices. Through the results of this study, we suggest a new perspective on the human identification task that performs better than previous approaches. BioMed Central 2020-07-16 /pmc/articles/PMC7367324/ /pubmed/32677882 http://dx.doi.org/10.1186/s12859-020-03613-3 Text en © The Author(s) 2020 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Research Song, Hyoung-Kyu AlAlkeem, Ebrahim Yun, Jaewoong Kim, Tae-Ho Yoo, Hyerin Heo, Dasom Chae, Myungsu Yeun, Chan Yeob Deep user identification model with multiple biometric data |
title | Deep user identification model with multiple biometric data |
title_full | Deep user identification model with multiple biometric data |
title_fullStr | Deep user identification model with multiple biometric data |
title_full_unstemmed | Deep user identification model with multiple biometric data |
title_short | Deep user identification model with multiple biometric data |
title_sort | deep user identification model with multiple biometric data |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7367324/ https://www.ncbi.nlm.nih.gov/pubmed/32677882 http://dx.doi.org/10.1186/s12859-020-03613-3 |
work_keys_str_mv | AT songhyoungkyu deepuseridentificationmodelwithmultiplebiometricdata AT alalkeemebrahim deepuseridentificationmodelwithmultiplebiometricdata AT yunjaewoong deepuseridentificationmodelwithmultiplebiometricdata AT kimtaeho deepuseridentificationmodelwithmultiplebiometricdata AT yoohyerin deepuseridentificationmodelwithmultiplebiometricdata AT heodasom deepuseridentificationmodelwithmultiplebiometricdata AT chaemyungsu deepuseridentificationmodelwithmultiplebiometricdata AT yeobyeunchan deepuseridentificationmodelwithmultiplebiometricdata |
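The abstract above describes a feature-level fusion architecture: a ResNet-50 backbone extracts features from each 2D modality (ECG, face, fingerprint), the features are concatenated, and two task heads identify the user and classify gender in a single step. The PyTorch sketch below is a minimal, hypothetical rendering of that idea; the class and layer names, the feature dimensions, and the head sizes (58 subjects, 2 genders) are assumptions based only on this record, not the authors' published implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50


class MultimodalMultitaskNet(nn.Module):
    """Hypothetical sketch: per-modality ResNet-50 extractors, feature-level
    fusion by concatenation, and two task heads (user ID and gender)."""

    def __init__(self, modalities=("ecg", "face", "fingerprint"),
                 num_subjects=58, feat_dim=2048):
        super().__init__()
        # One ResNet-50 backbone per 2D modality; dropping the final FC layer
        # leaves a pooled (B, 2048, 1, 1) feature map per input.
        self.backbones = nn.ModuleDict({
            m: nn.Sequential(*list(resnet50(weights=None).children())[:-1])
            for m in modalities
        })
        fused_dim = feat_dim * len(modalities)
        # Two heads share the fused representation (multitask learning).
        self.id_head = nn.Linear(fused_dim, num_subjects)  # user identification
        self.gender_head = nn.Linear(fused_dim, 2)          # gender classification

    def forward(self, inputs):
        # inputs: dict mapping modality name -> tensor of shape (B, 3, H, W)
        feats = [self.backbones[m](inputs[m]).flatten(1) for m in self.backbones]
        fused = torch.cat(feats, dim=1)  # feature-level fusion
        return self.id_head(fused), self.gender_head(fused)


if __name__ == "__main__":
    # Toy batch: three modalities rendered as 224x224 RGB images.
    model = MultimodalMultitaskNet()
    batch = {m: torch.randn(4, 3, 224, 224) for m in ("ecg", "face", "fingerprint")}
    id_logits, gender_logits = model(batch)
    print(id_logits.shape, gender_logits.shape)  # (4, 58) and (4, 2)
```

Because each modality gets its own backbone, the number of modalities can be changed by adding or removing entries in the backbone dictionary, consistent with the abstract's statement that the modality count is configurable within a single model.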