
Federated transfer learning for auxiliary classifier generative adversarial networks: framework and industrial application

Bibliographic Details
Main Authors: Guo, Wei; Wang, Yijin; Chen, Xin; Jiang, Pingyu
Format: Online Article Text
Language: English
Published: Springer US, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10162656/
https://www.ncbi.nlm.nih.gov/pubmed/37361337
http://dx.doi.org/10.1007/s10845-023-02126-z
Description
Summary: Machine learning that considers data privacy preservation and personalized models has received attention, especially in the manufacturing field. In real industrial scenarios, data often exist as isolated islands and cannot be shared because of data privacy concerns, making it difficult to gather enough data to train a personalized model without compromising privacy. To address this issue, we propose a Federated Transfer Learning framework based on Auxiliary Classifier Generative Adversarial Networks, named ACGAN-FTL. In the framework, Federated Learning (FL) trains a global model on the clients' decentralized datasets while preserving data privacy, and Transfer Learning (TL) transfers knowledge from the global model to a personalized model trained on a relatively small data volume. ACGAN acts as a data bridge connecting FL and TL by generating data with a probability distribution similar to that of the clients, since the client datasets used in FL cannot be used directly in TL without violating data privacy. A real industrial scenario, pre-baked carbon anode quality prediction, is used to verify the performance of the proposed framework. The results show that ACGAN-FTL not only achieves acceptable performance, with 0.81 accuracy, 0.86 precision, 0.74 recall, and 0.79 F1 score, but also preserves data privacy throughout the learning process. Compared to the baseline method without FL and TL, these metrics improve by 13%, 11%, 16%, and 15%, respectively. The experiments verify that the performance of the proposed ACGAN-FTL framework fulfills the requirements of industrial scenarios.
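
The sketch below is only an illustration of the pipeline described in the summary, not the authors' implementation: a FedAvg-style federated round over toy client datasets, a label-conditioned generator standing in for the (already trained) ACGAN data bridge, and transfer learning that fine-tunes the global model on synthetic plus small personalized data. All model sizes, data shapes, hyper-parameters, and helper names (make_classifier, local_update, fed_avg) are assumptions made for this example.

    # Minimal, hypothetical sketch of an ACGAN-FTL-style pipeline in PyTorch.
    import copy
    import torch
    import torch.nn as nn

    N_FEATURES, N_CLASSES, NOISE_DIM = 16, 2, 8

    def make_classifier():
        # Tiny MLP standing in for the shared global / personalized model.
        return nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(),
                             nn.Linear(32, N_CLASSES))

    class Generator(nn.Module):
        """Label-conditioned generator playing the role of the ACGAN data bridge."""
        def __init__(self):
            super().__init__()
            self.emb = nn.Embedding(N_CLASSES, NOISE_DIM)
            self.net = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(),
                                     nn.Linear(32, N_FEATURES))
        def forward(self, z, y):
            return self.net(z * self.emb(y))

    def local_update(model, x, y, epochs=5, lr=1e-2):
        # Client-side training on private data; only weights leave the client.
        model = copy.deepcopy(model)
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            nn.functional.cross_entropy(model(x), y).backward()
            opt.step()
        return model.state_dict()

    def fed_avg(states):
        # Server-side parameter averaging (FedAvg-style aggregation).
        avg = copy.deepcopy(states[0])
        for k in avg:
            avg[k] = torch.stack([s[k].float() for s in states]).mean(0)
        return avg

    # 1. Federated learning: clients train locally, the server aggregates.
    torch.manual_seed(0)
    clients = [(torch.randn(64, N_FEATURES), torch.randint(0, N_CLASSES, (64,)))
               for _ in range(3)]
    global_model = make_classifier()
    for _ in range(10):  # communication rounds
        states = [local_update(global_model, x, y) for x, y in clients]
        global_model.load_state_dict(fed_avg(states))

    # 2. Data bridge: a (hypothetically pre-trained) generator produces data
    #    with a distribution similar to the clients'; ACGAN training is omitted.
    gen = Generator()
    z = torch.randn(128, NOISE_DIM)
    y_fake = torch.randint(0, N_CLASSES, (128,))
    x_fake = gen(z, y_fake).detach()

    # 3. Transfer learning: fine-tune the global model on synthetic data plus
    #    the target client's small personalized dataset.
    x_small = torch.randn(16, N_FEATURES)
    y_small = torch.randint(0, N_CLASSES, (16,))
    x_tl, y_tl = torch.cat([x_fake, x_small]), torch.cat([y_fake, y_small])
    personal = copy.deepcopy(global_model)
    opt = torch.optim.SGD(personal.parameters(), lr=1e-3)
    for _ in range(20):
        opt.zero_grad()
        nn.functional.cross_entropy(personal(x_tl), y_tl).backward()
        opt.step()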