
RNNCon: Contribution Coverage Testing for Stacked Recurrent Neural Networks



Bibliographic Details
Main Authors: Du, Xiaoli, Zeng, Hongwei, Chen, Shengbo, Lei, Zhou
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10048185/
https://www.ncbi.nlm.nih.gov/pubmed/36981408
http://dx.doi.org/10.3390/e25030520
_version_ 1785014120986181632
author Du, Xiaoli
Zeng, Hongwei
Chen, Shengbo
Lei, Zhou
author_facet Du, Xiaoli
Zeng, Hongwei
Chen, Shengbo
Lei, Zhou
author_sort Du, Xiaoli
collection PubMed
description Recurrent Neural Networks (RNNs) are applied in safety-critical fields such as autonomous driving, aircraft collision detection, and smart credit. They are highly susceptible to input perturbations, yet little research on RNN-oriented testing techniques has been conducted, leaving a large number of sequential application domains exposed. To address these gaps, we aim to improve the test adequacy of RNNs, find more defects, and improve both the performance of RNN models and their robustness to input perturbations. We propose a test coverage metric for the underlying structure of RNNs, which is used to guide the generation of test inputs for testing RNNs. Although coverage metrics have been proposed for RNNs, such as the hidden state coverage in RNN-Test, they ignore the fact that the underlying structure of an RNN is still a fully connected neural network, but with an additional “delayer” that records the network state at the time of data input. We use contributions, i.e., the combination of the outputs of neurons and the weights they emit, as the minimum computational unit of RNNs to explore the finer-grained logical structure inside the recurrent cells. Compared to existing coverage metrics, our approach covers the decision mechanism of RNNs in more detail and is more likely to generate more adversarial samples and discover more flaws in the model. In this paper, we redefine the contribution coverage metric so that it applies to Stacked LSTMs and Stacked GRUs by considering the joint effect of neurons and weights in the underlying structure of the network. We propose a new coverage metric, RNNCon, which can be used to guide the generation of adversarial test inputs, and we design and implement a prototype testing framework, RNNCon-Test. Two datasets, four LSTM models, and four GRU models are used to verify the effectiveness of RNNCon-Test. Compared to RNN-Test, the current state-of-the-art study, RNNCon can cover the decision logic of RNNs more deeply. RNNCon-Test is effective not only in identifying defects in Deep Learning (DL) systems but also in improving the performance of the model when the adversarial inputs it generates are filtered and added to the training set to retrain the model. Even when the accuracy of the model is already high, RNNCon-Test is still able to improve accuracy by up to 0.45%.
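The description above sketches the core idea of contribution coverage: a contribution is the product of a neuron's output and one of the weights it emits, and coverage tracks which of these products have been exercised by the test inputs. The following is a minimal Python sketch of that idea for a single fully connected step, not the paper's exact definition for the gates of Stacked LSTM/GRU cells; the function name contribution_coverage, the threshold value, and the bookkeeping are illustrative assumptions.

# Illustrative sketch only: approximates contribution coverage for one
# fully connected step; names and the threshold are hypothetical.
import numpy as np

def contribution_coverage(outputs, weights, threshold=0.0, covered=None):
    """Track which (neuron, outgoing weight) pairs have produced a
    contribution, i.e. output * weight, above `threshold` so far.

    outputs : (batch, n_in)       neuron outputs at one time step
    weights : (n_in, n_out)       weights emitted by those neurons
    covered : (n_in, n_out) bool  running coverage map, updated in place
    """
    if covered is None:
        covered = np.zeros(weights.shape, dtype=bool)
    # contribution of neuron i to target j for each sample: out[i] * W[i, j]
    contributions = outputs[:, :, None] * weights[None, :, :]
    covered |= (contributions > threshold).any(axis=0)
    return covered, covered.mean()   # coverage ratio in [0, 1]

# Toy usage: a batch of hidden states feeding a recurrent weight matrix.
rng = np.random.default_rng(0)
h = rng.standard_normal((8, 4))      # batch of 8, 4 hidden units
W_hh = rng.standard_normal((4, 4))   # recurrent weights
cov_map, ratio = contribution_coverage(h, W_hh, threshold=0.5)
print(f"contribution coverage: {ratio:.2%}")

In a coverage-guided testing loop of the kind the abstract describes, the coverage map would be accumulated across time steps and test inputs, and inputs that raise the ratio would be kept as candidates for adversarial generation and retraining.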
format Online
Article
Text
id pubmed-10048185
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10048185 2023-03-29 RNNCon: Contribution Coverage Testing for Stacked Recurrent Neural Networks Du, Xiaoli Zeng, Hongwei Chen, Shengbo Lei, Zhou Entropy (Basel) Article Recurrent Neural Networks (RNNs) are applied in safety-critical fields such as autonomous driving, aircraft collision detection, and smart credit. They are highly susceptible to input perturbations, yet little research on RNN-oriented testing techniques has been conducted, leaving a large number of sequential application domains exposed. To address these gaps, we aim to improve the test adequacy of RNNs, find more defects, and improve both the performance of RNN models and their robustness to input perturbations. We propose a test coverage metric for the underlying structure of RNNs, which is used to guide the generation of test inputs for testing RNNs. Although coverage metrics have been proposed for RNNs, such as the hidden state coverage in RNN-Test, they ignore the fact that the underlying structure of an RNN is still a fully connected neural network, but with an additional “delayer” that records the network state at the time of data input. We use contributions, i.e., the combination of the outputs of neurons and the weights they emit, as the minimum computational unit of RNNs to explore the finer-grained logical structure inside the recurrent cells. Compared to existing coverage metrics, our approach covers the decision mechanism of RNNs in more detail and is more likely to generate more adversarial samples and discover more flaws in the model. In this paper, we redefine the contribution coverage metric so that it applies to Stacked LSTMs and Stacked GRUs by considering the joint effect of neurons and weights in the underlying structure of the network. We propose a new coverage metric, RNNCon, which can be used to guide the generation of adversarial test inputs, and we design and implement a prototype testing framework, RNNCon-Test. Two datasets, four LSTM models, and four GRU models are used to verify the effectiveness of RNNCon-Test. Compared to RNN-Test, the current state-of-the-art study, RNNCon can cover the decision logic of RNNs more deeply. RNNCon-Test is effective not only in identifying defects in Deep Learning (DL) systems but also in improving the performance of the model when the adversarial inputs it generates are filtered and added to the training set to retrain the model. Even when the accuracy of the model is already high, RNNCon-Test is still able to improve accuracy by up to 0.45%. MDPI 2023-03-17 /pmc/articles/PMC10048185/ /pubmed/36981408 http://dx.doi.org/10.3390/e25030520 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Du, Xiaoli
Zeng, Hongwei
Chen, Shengbo
Lei, Zhou
RNNCon: Contribution Coverage Testing for Stacked Recurrent Neural Networks
title RNNCon: Contribution Coverage Testing for Stacked Recurrent Neural Networks
title_full RNNCon: Contribution Coverage Testing for Stacked Recurrent Neural Networks
title_fullStr RNNCon: Contribution Coverage Testing for Stacked Recurrent Neural Networks
title_full_unstemmed RNNCon: Contribution Coverage Testing for Stacked Recurrent Neural Networks
title_short RNNCon: Contribution Coverage Testing for Stacked Recurrent Neural Networks
title_sort rnncon: contribution coverage testing for stacked recurrent neural networks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10048185/
https://www.ncbi.nlm.nih.gov/pubmed/36981408
http://dx.doi.org/10.3390/e25030520
work_keys_str_mv AT duxiaoli rnnconcontributioncoveragetestingforstackedrecurrentneuralnetworks
AT zenghongwei rnnconcontributioncoveragetestingforstackedrecurrentneuralnetworks
AT chenshengbo rnnconcontributioncoveragetestingforstackedrecurrentneuralnetworks
AT leizhou rnnconcontributioncoveragetestingforstackedrecurrentneuralnetworks