Disclosure control of machine learning models from trusted research environments (TRE): New challenges and opportunities
Main Authors: | Mansouri-Benssassi, Esma; Rogers, Simon; Reel, Smarti; Malone, Maeve; Smith, Jim; Ritchie, Felix; Jefferson, Emily |
Format: | Online Article Text |
Language: | English |
Published: | Elsevier, 2023 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10130764/ https://www.ncbi.nlm.nih.gov/pubmed/37123891 http://dx.doi.org/10.1016/j.heliyon.2023.e15143 |
author | Mansouri-Benssassi, Esma; Rogers, Simon; Reel, Smarti; Malone, Maeve; Smith, Jim; Ritchie, Felix; Jefferson, Emily
author_sort | Mansouri-Benssassi, Esma |
collection | PubMed |
description | INTRODUCTION: Artificial intelligence (AI) applications in healthcare and medicine have increased in recent years. To enable access to personal data, Trusted Research Environments (TREs) (otherwise known as Safe Havens) provide safe and secure environments in which researchers can access sensitive personal data and develop AI (in particular machine learning (ML)) models. However, few TREs currently support the training of ML models, in part due to a gap in practical decision-making guidance for TREs on handling model disclosure. Specifically, the training of ML models creates a need to disclose new types of outputs from TREs. Although TREs have clear policies for the disclosure of statistical outputs, the extent to which trained models can leak personal training data once released is not well understood. BACKGROUND: We review, for a general audience, different types of ML models and their applicability within healthcare. We explain the outputs from training an ML model and how trained ML models can be vulnerable to external attacks that seek to discover the personal data encoded within the model. RISKS: We present the challenges for disclosure control of trained ML models in the context of training and exporting models from TREs. We provide insights and analyse methods that could be introduced within TREs to mitigate the risk of privacy breaches when disclosing trained models. DISCUSSION: Although specific guidelines and policies exist for statistical disclosure controls in TREs, they do not satisfactorily address these new types of output requests, i.e., trained ML models. There is significant potential for new interdisciplinary research in developing and adapting policies and tools for safely disclosing ML outputs from TREs.
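To make concrete the description's claim that trained models can leak personal training data, the sketch below shows a minimal membership inference attack of the kind such attacks exemplify. Everything in it is an illustrative assumption rather than the paper's method: the data are synthetic, the model is a deliberately overfit random forest, and the 0.9 confidence threshold is arbitrary.

```python
# Minimal, illustrative membership inference sketch (not the paper's method):
# an overfit classifier tends to be more confident on records it was trained
# on, and an attacker with query access can exploit that gap to guess whether
# a given person's record was in the training set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for sensitive personal data (assumption: binary labels).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, _ = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Deliberately overfit: fully grown trees memorise their training records.
model = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)
model.fit(X_member, y_member)

def top_confidence(model, X):
    """Attacker's signal: the model's highest predicted class probability."""
    return model.predict_proba(X).max(axis=1)

# Threshold attack: flag a record as a training "member" when the model is
# very confident about it. The 0.9 cutoff is an arbitrary illustration.
threshold = 0.9
member_rate = (top_confidence(model, X_member) > threshold).mean()
nonmember_rate = (top_confidence(model, X_nonmember) > threshold).mean()
print(f"flagged members: {member_rate:.2f}, flagged non-members: {nonmember_rate:.2f}")
```

A large gap between the two flag rates indicates that the model has memorised its training records. An output checker in a TRE could run this kind of probe before approving a model for export, and mitigations of the sort the paper discusses (e.g., limiting model capacity or training with differential privacy) would be expected to shrink the gap.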
format | Online Article Text |
id | pubmed-10130764 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Elsevier |
record_format | MEDLINE/PubMed |
spelling | pubmed-10130764 2023-04-27 Heliyon Review Article Elsevier 2023-04-03 /pmc/articles/PMC10130764/ /pubmed/37123891 http://dx.doi.org/10.1016/j.heliyon.2023.e15143 Text en © 2023 The Authors. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
title | Disclosure control of machine learning models from trusted research environments (TRE): New challenges and opportunities |
topic | Review Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10130764/ https://www.ncbi.nlm.nih.gov/pubmed/37123891 http://dx.doi.org/10.1016/j.heliyon.2023.e15143 |