A Genetic Attack Against Machine Learning Classifiers to Steal Biometric Actigraphy Profiles from Health Related Sensor Data
Main Authors:
Format: Online Article Text
Language: English
Published: Springer US, 2020
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7497442/
https://www.ncbi.nlm.nih.gov/pubmed/32929615
http://dx.doi.org/10.1007/s10916-020-01646-y
Summary: In this work, we propose the use of a genetic-algorithm-based attack against machine learning classifiers with the aim of ‘stealing’ users’ biometric actigraphy profiles from health-related sensor data. The target classification model uses daily actigraphy patterns for user identification. The biometric profiles are modeled as what we call impersonator examples, which are generated based solely on the prediction confidence scores obtained by repeatedly querying the target classifier. We conducted experiments in a black-box setting on a public dataset that contains actigraphy profiles from 55 individuals. The data consist of daily motion patterns recorded with an actigraphy device. These patterns can be used as biometric profiles to identify each individual. Our attack was able to generate examples capable of impersonating a target user with a success rate of 94.5%. Furthermore, we found that the impersonator examples have high transferability to other classifiers trained with the same training set. We also show that the generated biometric profiles closely resemble the ground-truth profiles, which can lead to sensitive data exposure, such as revealing the time of day an individual wakes up and goes to bed.
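The summary describes a black-box attack in which candidate profiles are evolved using only the classifier's confidence score for the targeted user as a fitness signal. The following is a minimal sketch of that idea, assuming a scikit-learn-style model exposing predict_proba; the population size, crossover scheme, mutation rate, and feature ranges are illustrative assumptions, not the authors' exact implementation.

```python
# Illustrative black-box genetic-algorithm attack: evolve "impersonator"
# actigraphy profiles using only the target model's confidence scores.
# All parameters and the predict_proba interface are assumptions.
import numpy as np

def ga_impersonation_attack(target_model, target_user, n_features,
                            pop_size=50, generations=200,
                            mutation_rate=0.1, seed=None):
    rng = np.random.default_rng(seed)
    # Random initial population of candidate daily activity profiles in [0, 1].
    population = rng.random((pop_size, n_features))

    def fitness(candidates):
        # Fitness = classifier confidence that a candidate belongs to the victim.
        return target_model.predict_proba(candidates)[:, target_user]

    for _ in range(generations):
        scores = fitness(population)
        # Selection: keep the top half of the population as parents.
        parents = population[np.argsort(scores)[::-1][: pop_size // 2]]
        # Crossover: uniformly mix feature values from two random parents.
        idx_a = rng.integers(len(parents), size=pop_size)
        idx_b = rng.integers(len(parents), size=pop_size)
        mask = rng.random((pop_size, n_features)) < 0.5
        children = np.where(mask, parents[idx_a], parents[idx_b])
        # Mutation: perturb a small fraction of features with Gaussian noise.
        mutate = rng.random(children.shape) < mutation_rate
        children = np.clip(children + mutate * rng.normal(0.0, 0.1, children.shape), 0.0, 1.0)
        population = children

    # Return the candidate profile that best impersonates the target user.
    return population[np.argmax(fitness(population))]
```

Because the fitness function only reads the returned probabilities, the sketch never needs the model's parameters or training data, which matches the black-box setting described above.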