
An Abstraction-Based Framework for Neural Network Verification

Deep neural networks are increasingly being used as controllers for safety-critical systems. Because neural networks are opaque, certifying their correctness is a significant challenge. To address this issue, several neural network verification approaches have recently been proposed. However, these approaches afford limited scalability, and applying them to large networks can be challenging. In this paper, we propose a framework that can enhance neural network verification techniques by using over-approximation to reduce the size of the network—thus making it more amenable to verification. We perform the approximation such that if the property holds for the smaller (abstract) network, it holds for the original as well. The over-approximation may be too coarse, in which case the underlying verification tool might return a spurious counterexample. Under such conditions, we perform counterexample-guided refinement to adjust the approximation, and then repeat the process. Our approach is orthogonal to, and can be integrated with, many existing verification techniques. For evaluation purposes, we integrate it with the recently proposed Marabou framework, and observe a significant improvement in Marabou's performance. Our experiments demonstrate the great potential of our approach for verifying larger neural networks.
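The abstract describes a counterexample-guided abstraction-refinement (CEGAR) loop: verify a smaller over-approximated network, check whether any counterexample is real, and refine the abstraction if it is spurious. The sketch below illustrates that control flow only; `verify_with_refinement`, `check_abstract`, `check_concrete`, and `refine` are hypothetical stand-ins, not the paper's or Marabou's actual API.

```python
def verify_with_refinement(check_abstract, check_concrete, refine, max_iters=100):
    """Generic CEGAR skeleton for the loop described in the abstract.

    check_abstract() -> None if the property holds on the abstract (smaller)
        network, else a candidate counterexample.
    check_concrete(cex) -> True if the counterexample also violates the
        property on the original network (i.e. it is real, not spurious).
    refine(cex) -> tightens the over-approximation using the spurious cex.
    """
    for _ in range(max_iters):
        cex = check_abstract()
        if cex is None:
            # Soundness of the over-approximation: property holds on the
            # abstract network => it holds on the original network too.
            return ("SAFE", None)
        if check_concrete(cex):
            return ("UNSAFE", cex)  # real violation of the original network
        refine(cex)                 # spurious: adjust abstraction, retry
    return ("UNKNOWN", None)


# Toy instantiation: property "output <= 5"; the abstraction is initially
# too coarse (it allows outputs up to 7) and refinement tightens it to the
# true bound 4, after which the property is proved.
state = {"bound": 7}
result = verify_with_refinement(
    check_abstract=lambda: state["bound"] if state["bound"] > 5 else None,
    check_concrete=lambda cex: cex <= 4,          # real outputs never exceed 4
    refine=lambda cex: state.update(bound=4),
)
# result == ("SAFE", None)
```

In the paper's setting, the abstract network is produced by merging neurons so that its output over-approximates the original's, which is what justifies returning SAFE without re-checking the full network.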


Bibliographic Details
Main Authors: Elboher, Yizhak Yisrael, Gottschlich, Justin, Katz, Guy
Format: Online Article Text
Language: English
Published: 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7363186/
http://dx.doi.org/10.1007/978-3-030-53288-8_3
Collection: PubMed (record id: pubmed-7363186)
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Published online: 2020-06-13
© The Author(s) 2020. Open Access: This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.