
Modeling Threats to AI-ML Systems Using STRIDE †

The application of emerging technologies, such as Artificial Intelligence (AI), entails risks that need to be addressed to ensure secure and trustworthy socio-technical infrastructures. Machine Learning (ML), the most developed subfield of AI, allows for improved decision-making processes. However, ML models exhibit specific vulnerabilities that conventional IT systems are not subject to. As systems incorporating ML components become increasingly pervasive, the need to provide security practitioners with threat modeling tailored to the specific AI-ML pipeline is of paramount importance. Currently, there exists no well-established approach accounting for the entire ML life-cycle in the identification and analysis of threats targeting ML techniques. In this paper, we propose an asset-centered methodology—STRIDE-AI—for assessing the security of AI-ML-based systems. We discuss how to apply the FMEA process to identify how assets generated and used at different stages of the ML life-cycle may fail. By adapting Microsoft’s STRIDE approach to the AI-ML domain, we map potential ML failure modes to threats and security properties these threats may endanger. The proposed methodology can assist ML practitioners in choosing the most effective security controls to protect ML assets. We illustrate STRIDE-AI with the help of a real-world use case selected from the TOREADOR H2020 project.

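As an illustrative aside to the abstract above: the paper builds on the classic STRIDE correspondence between threat categories and the security properties they endanger. A minimal sketch of that canonical mapping follows; the dictionary and function names are illustrative and not taken from the paper, which extends this mapping to ML-specific assets and failure modes.

```python
# Canonical STRIDE threat categories and the security property each endangers.
# Illustrative sketch only; STRIDE-AI adapts this mapping to ML assets.
STRIDE_PROPERTY = {
    "Spoofing": "Authenticity",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information disclosure": "Confidentiality",
    "Denial of service": "Availability",
    "Elevation of privilege": "Authorization",
}

def endangered_property(threat: str) -> str:
    """Return the security property endangered by a STRIDE threat category."""
    return STRIDE_PROPERTY[threat]
```

For example, a poisoned training set would fall under Tampering, endangering the integrity of the dataset asset.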

Bibliographic Details
Main Authors: Mauri, Lara; Damiani, Ernesto
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9459912/
https://www.ncbi.nlm.nih.gov/pubmed/36081121
http://dx.doi.org/10.3390/s22176662
_version_ 1784786621196926976
author Mauri, Lara
Damiani, Ernesto
author_facet Mauri, Lara
Damiani, Ernesto
author_sort Mauri, Lara
collection PubMed
description The application of emerging technologies, such as Artificial Intelligence (AI), entails risks that need to be addressed to ensure secure and trustworthy socio-technical infrastructures. Machine Learning (ML), the most developed subfield of AI, allows for improved decision-making processes. However, ML models exhibit specific vulnerabilities that conventional IT systems are not subject to. As systems incorporating ML components become increasingly pervasive, the need to provide security practitioners with threat modeling tailored to the specific AI-ML pipeline is of paramount importance. Currently, there exists no well-established approach accounting for the entire ML life-cycle in the identification and analysis of threats targeting ML techniques. In this paper, we propose an asset-centered methodology—STRIDE-AI—for assessing the security of AI-ML-based systems. We discuss how to apply the FMEA process to identify how assets generated and used at different stages of the ML life-cycle may fail. By adapting Microsoft’s STRIDE approach to the AI-ML domain, we map potential ML failure modes to threats and security properties these threats may endanger. The proposed methodology can assist ML practitioners in choosing the most effective security controls to protect ML assets. We illustrate STRIDE-AI with the help of a real-world use case selected from the TOREADOR H2020 project.
format Online
Article
Text
id pubmed-9459912
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9459912 2022-09-10 Modeling Threats to AI-ML Systems Using STRIDE † Mauri, Lara; Damiani, Ernesto. Sensors (Basel), Article. MDPI 2022-09-03 /pmc/articles/PMC9459912/ /pubmed/36081121 http://dx.doi.org/10.3390/s22176662 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Mauri, Lara
Damiani, Ernesto
Modeling Threats to AI-ML Systems Using STRIDE †
title Modeling Threats to AI-ML Systems Using STRIDE †
title_full Modeling Threats to AI-ML Systems Using STRIDE †
title_fullStr Modeling Threats to AI-ML Systems Using STRIDE †
title_full_unstemmed Modeling Threats to AI-ML Systems Using STRIDE †
title_short Modeling Threats to AI-ML Systems Using STRIDE †
title_sort modeling threats to ai-ml systems using stride †
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9459912/
https://www.ncbi.nlm.nih.gov/pubmed/36081121
http://dx.doi.org/10.3390/s22176662
work_keys_str_mv AT maurilara modelingthreatstoaimlsystemsusingstride
AT damianiernesto modelingthreatstoaimlsystemsusingstride
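The description mentions applying the FMEA process to identify how ML assets may fail. In standard FMEA, each failure mode is scored on 1–10 scales and ranked by a Risk Priority Number (severity × occurrence × detection). A minimal sketch under that standard scoring follows; the class and field names are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """An ML-asset failure mode scored on the standard FMEA 1-10 scales."""
    asset: str
    description: str
    severity: int    # impact if the failure occurs
    occurrence: int  # likelihood of the failure occurring
    detection: int   # difficulty of detecting it (10 = hardest)

    def rpn(self) -> int:
        # Risk Priority Number: the standard FMEA product of the three scores.
        return self.severity * self.occurrence * self.detection

fm = FailureMode("training set", "label flipping via poisoned data", 8, 5, 7)
# fm.rpn() == 280
```

Ranking failure modes by RPN lets practitioners prioritize which ML assets need security controls first.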