
A Systematic Review on Model Watermarking for Neural Networks

Machine learning (ML) models are applied in an increasing variety of domains. The availability of large amounts of data and computational resources encourages the development of ever more complex and valuable models. These models are considered the intellectual property of the legitimate parties who...


Bibliographic Details
Main Author: Boenisch, Franziska
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8667341/
https://www.ncbi.nlm.nih.gov/pubmed/34913032
http://dx.doi.org/10.3389/fdata.2021.729663
_version_ 1784614371235725312
author Boenisch, Franziska
author_facet Boenisch, Franziska
author_sort Boenisch, Franziska
collection PubMed
description Machine learning (ML) models are applied in an increasing variety of domains. The availability of large amounts of data and computational resources encourages the development of ever more complex and valuable models. These models are considered the intellectual property of the legitimate parties who have trained them, which makes their protection against stealing, illegitimate redistribution, and unauthorized application an urgent need. Digital watermarking presents a strong mechanism for marking model ownership and, thereby, offers protection against those threats. This work presents a taxonomy identifying and analyzing different classes of watermarking schemes for ML models. It introduces a unified threat model to allow structured reasoning on and comparison of the effectiveness of watermarking methods in different scenarios. Furthermore, it systematizes desired security requirements and attacks against ML model watermarking. Based on that framework, representative literature from the field is surveyed to illustrate the taxonomy. Finally, shortcomings and general limitations of existing approaches are discussed, and an outlook on future research directions is given.
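To make the surveyed concept concrete, below is a minimal, illustrative sketch of one widely discussed class of schemes, trigger-set (backdoor-based) watermarking, in which the owner trains the model to memorize secret input-label pairs and later queries a suspect model on them to support an ownership claim. It assumes PyTorch; all names (WatermarkedNet, make_trigger_set, embed_watermark, verify_watermark) and parameter values are hypothetical and are not taken from the reviewed paper.

```python
# Illustrative sketch of trigger-set ("backdoor") watermarking for a classifier.
# All names and hyperparameters here are hypothetical, not from the reviewed paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WatermarkedNet(nn.Module):
    """Tiny classifier; the watermark lives in its behavior, not in explicit weights."""
    def __init__(self, in_dim=784, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_classes))

    def forward(self, x):
        return self.net(x)

def make_trigger_set(n=32, in_dim=784, n_classes=10, seed=0):
    # Secret key material: random inputs paired with randomly chosen target labels.
    g = torch.Generator().manual_seed(seed)
    x = torch.rand(n, in_dim, generator=g)
    y = torch.randint(0, n_classes, (n,), generator=g)
    return x, y

def embed_watermark(model, trigger_x, trigger_y, task_loader, epochs=5, lr=1e-3):
    # Jointly fit the original task and the trigger set so the model memorizes
    # the owner-chosen (input, label) pairs without degrading task accuracy much.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in task_loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y) \
                 + F.cross_entropy(model(trigger_x), trigger_y)
            loss.backward()
            opt.step()
    return model

def verify_watermark(model, trigger_x, trigger_y, threshold=0.9):
    # Black-box ownership check: a suspect model that reproduces the secret
    # labels far above chance level is flagged as derived from the marked model.
    with torch.no_grad():
        acc = (model(trigger_x).argmax(dim=1) == trigger_y).float().mean().item()
    return acc >= threshold, acc
```

In this setting verification only requires prediction access to the suspect model, which is one of the deployment scenarios that a threat model for ML watermarking, such as the one described in this review, has to cover.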
format Online
Article
Text
id pubmed-8667341
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-8667341 2021-12-14 A Systematic Review on Model Watermarking for Neural Networks Boenisch, Franziska Front Big Data Big Data Machine learning (ML) models are applied in an increasing variety of domains. The availability of large amounts of data and computational resources encourages the development of ever more complex and valuable models. These models are considered the intellectual property of the legitimate parties who have trained them, which makes their protection against stealing, illegitimate redistribution, and unauthorized application an urgent need. Digital watermarking presents a strong mechanism for marking model ownership and, thereby, offers protection against those threats. This work presents a taxonomy identifying and analyzing different classes of watermarking schemes for ML models. It introduces a unified threat model to allow structured reasoning on and comparison of the effectiveness of watermarking methods in different scenarios. Furthermore, it systematizes desired security requirements and attacks against ML model watermarking. Based on that framework, representative literature from the field is surveyed to illustrate the taxonomy. Finally, shortcomings and general limitations of existing approaches are discussed, and an outlook on future research directions is given. Frontiers Media S.A. 2021-11-29 /pmc/articles/PMC8667341/ /pubmed/34913032 http://dx.doi.org/10.3389/fdata.2021.729663 Text en Copyright © 2021 Boenisch. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Big Data
Boenisch, Franziska
A Systematic Review on Model Watermarking for Neural Networks
title A Systematic Review on Model Watermarking for Neural Networks
title_full A Systematic Review on Model Watermarking for Neural Networks
title_fullStr A Systematic Review on Model Watermarking for Neural Networks
title_full_unstemmed A Systematic Review on Model Watermarking for Neural Networks
title_short A Systematic Review on Model Watermarking for Neural Networks
title_sort systematic review on model watermarking for neural networks
topic Big Data
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8667341/
https://www.ncbi.nlm.nih.gov/pubmed/34913032
http://dx.doi.org/10.3389/fdata.2021.729663
work_keys_str_mv AT boenischfranziska asystematicreviewonmodelwatermarkingforneuralnetworks
AT boenischfranziska systematicreviewonmodelwatermarkingforneuralnetworks