
Distributed learning: a reliable privacy-preserving strategy to change multicenter collaborations using AI

PURPOSE: This scoping review assesses the non-inferiority of distributed learning relative to centrally and locally trained machine learning (ML) models in medical applications. METHODS: We performed a literature search using the terms “distributed learning” OR “federated learning” in the PubMed/MEDLINE and EMBASE databases. No start date limit was used, and the search extended to July 21, 2020. We excluded articles outside the field of interest; guidelines or expert opinions, review articles and meta-analyses, editorials, letters or commentaries, and conference abstracts; articles not in English; and studies not using medical data. Selected studies were classified and analysed according to their aim(s). RESULTS: We included 26 papers aimed at predicting one or more outcomes, namely risk, diagnosis, prognosis, and treatment side effect/adverse drug reaction. Distributed learning was compared to centralized or localized training in 21/26 and 14/26 of the selected papers, respectively. Regardless of the aim, the type of input, the method, and the classifier, distributed learning performed close to centralized training in all but two experiments, both focused on diagnosis. Similarly, in all but two cases, distributed learning outperformed locally trained models. CONCLUSION: Distributed learning proved a reliable strategy for model development, performing on par with models trained on centralized datasets. Sensitive data remain protected, since they are never shared for model development. Distributed learning is therefore a promising solution for ML-based research and practice, where large, diverse datasets are crucial for success.
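
The abstract compares distributed (federated) learning with centralized and local training but does not spell out the mechanics. Below is a minimal, hypothetical sketch of federated averaging (FedAvg), one common distributed-learning scheme, on a toy logistic-regression problem in NumPy. It illustrates the general technique only; the simulated hospital datasets, model, and hyperparameters are invented for the example and are not taken from the reviewed studies.

```python
# Minimal FedAvg-style sketch (illustrative only; not the pipeline of any
# reviewed study). Three simulated "hospitals" each hold private data;
# only model weights, never raw records, are exchanged with the server.
import numpy as np

rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, epochs=5):
    """A few epochs of gradient descent for logistic regression on one
    center's private data. Only the updated weights leave the site."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        w = w - lr * X.T @ (p - y) / len(y)   # logistic-loss gradient step
    return w

def federated_average(weights, sizes):
    """Server-side aggregation: average the local models, weighting each
    center by its sample count (the FedAvg rule)."""
    total = sum(sizes)
    return sum((n / total) * w for w, n in zip(weights, sizes))

# Simulated private datasets of different sizes (never pooled).
true_w = np.array([1.5, -2.0, 0.5])
centers = []
for n in (200, 350, 120):
    X = rng.normal(size=(n, 3))
    y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(float)
    centers.append((X, y))

# Federated rounds: broadcast global weights, train locally, aggregate.
w_global = np.zeros(3)
for _ in range(20):
    local_ws = [local_train(w_global.copy(), X, y) for X, y in centers]
    w_global = federated_average(local_ws, [len(y) for _, y in centers])

# Centralized baseline: pools the raw data, which privacy rules may forbid.
X_all = np.vstack([X for X, _ in centers])
y_all = np.concatenate([y for _, y in centers])
w_central = local_train(np.zeros(3), X_all, y_all, epochs=100)

accuracy = lambda w: np.mean(((X_all @ w) > 0) == y_all)
print(f"federated accuracy:   {accuracy(w_global):.3f}")
print(f"centralized accuracy: {accuracy(w_central):.3f}")
```

The privacy property claimed in the conclusion corresponds to the exchange pattern above: only fitted weights travel between sites, while patient-level data never leave the hospitals.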

Bibliographic Details
Main authors: Kirienko, Margarita; Sollini, Martina; Ninatti, Gaia; Loiacono, Daniele; Giacomello, Edoardo; Gozzi, Noemi; Amigoni, Francesco; Mainardi, Luca; Lanzi, Pier Luca; Chiti, Arturo
Format: Online Article Text
Language: English
Journal: Eur J Nucl Med Mol Imaging
Published: Springer Berlin Heidelberg, 2021 (online April 13, 2021)
Subjects: Review Article
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8041944/
https://www.ncbi.nlm.nih.gov/pubmed/33847779
http://dx.doi.org/10.1007/s00259-021-05339-7
License: © The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2021. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.