
Predicting facility-based delivery in Zanzibar: The vulnerability of machine learning algorithms to adversarial attacks

Bibliographic Details
Main Authors: Tsai, Yi-Ting, Fulcher, Isabel R., Li, Tracey, Sukums, Felix, Hedt-Gauthier, Bethany
Format: Online Article Text
Language: English
Published: Elsevier 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10205516/
https://www.ncbi.nlm.nih.gov/pubmed/37234636
http://dx.doi.org/10.1016/j.heliyon.2023.e16244
_version_ 1785046057246261248
author Tsai, Yi-Ting
Fulcher, Isabel R.
Li, Tracey
Sukums, Felix
Hedt-Gauthier, Bethany
author_facet Tsai, Yi-Ting
Fulcher, Isabel R.
Li, Tracey
Sukums, Felix
Hedt-Gauthier, Bethany
author_sort Tsai, Yi-Ting
collection PubMed
description BACKGROUND: Community health worker (CHW)-led maternal health programs have contributed to increased facility-based deliveries and decreased maternal mortality in sub-Saharan Africa. The recent adoption of mobile devices in these programs provides an opportunity for real-time implementation of machine learning predictive models to identify women most at risk for home-based delivery. However, falsified data could be entered into the model to obtain a specific prediction result – known as an “adversarial attack”. The goal of this paper is to evaluate the algorithm's vulnerability to adversarial attacks. METHODS: The dataset used in this research is from the Uzazi Salama (“Safer Deliveries”) program, which operated between 2016 and 2019 in Zanzibar. We used LASSO-regularized logistic regression to develop the prediction model. We applied “One-At-a-Time (OAT)” adversarial attacks across four types of input variables: binary – access to electricity at home; categorical – previous delivery location; ordinal – educational level; and continuous – gestational age. We evaluated the percentage of predicted classifications that changed due to these adversarial attacks. RESULTS: Manipulating input variables affected prediction results. The most vulnerable variable was previous delivery location: 55.65% of predicted classifications changed when adversarial attacks switched a previous facility delivery to a previous home delivery, and 37.63% changed when attacks switched a previous home delivery to a previous facility delivery. CONCLUSION: This paper investigates the vulnerability of an algorithm that predicts facility-based delivery to adversarial attacks. By understanding the effect of adversarial attacks, programs can implement data monitoring strategies to detect and deter these manipulations. Maintaining fidelity in algorithm deployment ensures that CHWs target the women who are actually at high risk of delivering at home.
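To illustrate the mechanics described in the methods, the following is a minimal sketch, not the authors' code, of an OAT attack against an L1-penalized (LASSO) logistic regression: a single input variable is overwritten for a subset of records and the percentage of predicted classifications that flip is reported. The synthetic data, feature names, and use of scikit-learn are illustrative assumptions, not details from the paper.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in data; the actual model was trained on Safer Deliveries records.
n = 1000
X = pd.DataFrame({
    "electricity_at_home": rng.integers(0, 2, n),         # binary
    "previous_delivery_facility": rng.integers(0, 2, n),  # categorical, collapsed to 0/1 here
    "education_level": rng.integers(0, 4, n),             # ordinal
    "gestational_age_weeks": rng.uniform(20, 42, n),      # continuous
})
y = rng.integers(0, 2, n)  # 1 = facility-based delivery

# LASSO-regularized (L1-penalized) logistic regression.
model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)

def oat_flip_rate(model, X, feature, new_value, subset):
    """Percent of predicted classifications that change when one input
    variable is overwritten with `new_value` for the rows in `subset`."""
    before = model.predict(X.loc[subset])
    attacked = X.loc[subset].copy()
    attacked[feature] = new_value
    after = model.predict(attacked)
    return 100 * np.mean(before != after)

# Example attack on previous delivery location: facility -> home.
facility_rows = X["previous_delivery_facility"] == 1
print(oat_flip_rate(model, X, "previous_delivery_facility", 0, facility_rows))
```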
format Online
Article
Text
id pubmed-10205516
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-10205516 2023-05-25 Predicting facility-based delivery in Zanzibar: The vulnerability of machine learning algorithms to adversarial attacks Tsai, Yi-Ting Fulcher, Isabel R. Li, Tracey Sukums, Felix Hedt-Gauthier, Bethany Heliyon Research Article BACKGROUND: Community health worker (CHW)-led maternal health programs have contributed to increased facility-based deliveries and decreased maternal mortality in sub-Saharan Africa. The recent adoption of mobile devices in these programs provides an opportunity for real-time implementation of machine learning predictive models to identify women most at risk for home-based delivery. However, falsified data could be entered into the model to obtain a specific prediction result – known as an “adversarial attack”. The goal of this paper is to evaluate the algorithm's vulnerability to adversarial attacks. METHODS: The dataset used in this research is from the Uzazi Salama (“Safer Deliveries”) program, which operated between 2016 and 2019 in Zanzibar. We used LASSO-regularized logistic regression to develop the prediction model. We applied “One-At-a-Time (OAT)” adversarial attacks across four types of input variables: binary – access to electricity at home; categorical – previous delivery location; ordinal – educational level; and continuous – gestational age. We evaluated the percentage of predicted classifications that changed due to these adversarial attacks. RESULTS: Manipulating input variables affected prediction results. The most vulnerable variable was previous delivery location: 55.65% of predicted classifications changed when adversarial attacks switched a previous facility delivery to a previous home delivery, and 37.63% changed when attacks switched a previous home delivery to a previous facility delivery. CONCLUSION: This paper investigates the vulnerability of an algorithm that predicts facility-based delivery to adversarial attacks. By understanding the effect of adversarial attacks, programs can implement data monitoring strategies to detect and deter these manipulations. Maintaining fidelity in algorithm deployment ensures that CHWs target the women who are actually at high risk of delivering at home. Elsevier 2023-05-13 /pmc/articles/PMC10205516/ /pubmed/37234636 http://dx.doi.org/10.1016/j.heliyon.2023.e16244 Text en © 2023 The Authors. Published by Elsevier Ltd. https://creativecommons.org/licenses/by-nc-nd/4.0/ This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
spellingShingle Research Article
Tsai, Yi-Ting
Fulcher, Isabel R.
Li, Tracey
Sukums, Felix
Hedt-Gauthier, Bethany
Predicting facility-based delivery in Zanzibar: The vulnerability of machine learning algorithms to adversarial attacks
title Predicting facility-based delivery in Zanzibar: The vulnerability of machine learning algorithms to adversarial attacks
title_full Predicting facility-based delivery in Zanzibar: The vulnerability of machine learning algorithms to adversarial attacks
title_fullStr Predicting facility-based delivery in Zanzibar: The vulnerability of machine learning algorithms to adversarial attacks
title_full_unstemmed Predicting facility-based delivery in Zanzibar: The vulnerability of machine learning algorithms to adversarial attacks
title_short Predicting facility-based delivery in Zanzibar: The vulnerability of machine learning algorithms to adversarial attacks
title_sort predicting facility-based delivery in zanzibar: the vulnerability of machine learning algorithms to adversarial attacks
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10205516/
https://www.ncbi.nlm.nih.gov/pubmed/37234636
http://dx.doi.org/10.1016/j.heliyon.2023.e16244
work_keys_str_mv AT tsaiyiting predictingfacilitybaseddeliveryinzanzibarthevulnerabilityofmachinelearningalgorithmstoadversarialattacks
AT fulcherisabelr predictingfacilitybaseddeliveryinzanzibarthevulnerabilityofmachinelearningalgorithmstoadversarialattacks
AT litracey predictingfacilitybaseddeliveryinzanzibarthevulnerabilityofmachinelearningalgorithmstoadversarialattacks
AT sukumsfelix predictingfacilitybaseddeliveryinzanzibarthevulnerabilityofmachinelearningalgorithmstoadversarialattacks
AT hedtgauthierbethany predictingfacilitybaseddeliveryinzanzibarthevulnerabilityofmachinelearningalgorithmstoadversarialattacks