
Efficient Integrity-Tree Structure for Convolutional Neural Networks through Frequent Counter Overflow Prevention in Secure Memories

Bibliographic Details
Main Authors: Kim, Jesung, Lee, Wonyoung, Hong, Jeongkyu, Kim, Soontae
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9694360/
https://www.ncbi.nlm.nih.gov/pubmed/36433359
http://dx.doi.org/10.3390/s22228762
author Kim, Jesung
Lee, Wonyoung
Hong, Jeongkyu
Kim, Soontae
collection PubMed
description Advancements in convolutional neural networks (CNNs) have resulted in remarkable success in various computing fields. However, the need to protect data against external security attacks has become increasingly important because the inference process in CNNs exploits sensitive data. Secure memory is a hardware-based protection technique that can protect the sensitive data of CNNs. However, naively applying secure memory to a CNN application causes significant performance and energy overhead. Furthermore, ensuring secure memory becomes more difficult in environments that require area efficiency and low-power execution, such as the Internet of Things (IoT). In this paper, we investigated memory access patterns of CNN workloads and analyzed their effects on secure memory performance. According to our observations, most CNN workloads write intensively to narrow memory regions, which can cause a considerable number of counter overflows. On average, 87.6% of total writes occur in 6.8% of the allocated memory space; in the extreme case, 93.9% of total writes occur in 1.4% of the allocated memory space. Based on these observations, we propose an efficient integrity-tree structure called Countermark-tree that is suitable for CNN workloads. The proposed technique reduces overall energy consumption by 48%, shows a performance improvement of 11.2% compared to VAULT-128, and requires an integrity-tree size similar to that of VAULT-64, a state-of-the-art technique.
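The abstract attributes the overhead to frequent counter overflows when writes concentrate in a narrow region of memory. The following is a minimal sketch, not taken from the paper: it models a generic split-counter scheme in which each block has a small per-block minor counter and an overflow forces the whole page to be re-encrypted. The 6-bit minor counters, 64-block pages, and the skewed write distribution are illustrative assumptions only, intended to show how write locality of the kind reported above drives re-encryption events.

```python
# Toy model (assumed parameters, not the paper's design): count how many
# page re-encryptions are triggered by minor-counter overflows under a
# skewed vs. a roughly uniform write pattern.
import random
from collections import defaultdict

BLOCKS_PER_PAGE = 64        # 64-byte blocks per 4 KiB page (assumption)
MINOR_BITS = 6              # per-block minor counter width (assumption)
MINOR_MAX = (1 << MINOR_BITS) - 1

def simulate(num_pages, num_writes, hot_fraction, hot_weight):
    """Return the number of page re-encryptions caused by overflows.

    hot_fraction : fraction of pages forming the "hot" region
    hot_weight   : fraction of writes that land in the hot region
    """
    minor = defaultdict(int)            # (page, block) -> minor counter
    overflows = 0
    hot_pages = max(1, int(num_pages * hot_fraction))

    for _ in range(num_writes):
        if random.random() < hot_weight:
            page = random.randrange(hot_pages)               # hot region
        else:
            page = random.randrange(hot_pages, num_pages)    # cold region
        block = random.randrange(BLOCKS_PER_PAGE)
        minor[(page, block)] += 1
        if minor[(page, block)] > MINOR_MAX:
            # Minor counter wrapped: the page's major counter is bumped,
            # so every block in the page must be re-encrypted and the
            # integrity tree updated.
            overflows += 1
            for b in range(BLOCKS_PER_PAGE):
                minor[(page, b)] = 0

    return overflows

if __name__ == "__main__":
    random.seed(0)
    # Skew loosely inspired by the observation that ~88% of writes hit
    # ~7% of the allocated space; the numbers here are only a toy.
    skewed = simulate(num_pages=1000, num_writes=1_000_000,
                      hot_fraction=0.07, hot_weight=0.88)
    uniform = simulate(num_pages=1000, num_writes=1_000_000,
                       hot_fraction=0.07, hot_weight=0.07)
    print(f"re-encryptions (skewed writes):  {skewed}")
    print(f"re-encryptions (uniform writes): {uniform}")
```

Running the sketch shows far more overflow-induced re-encryptions under the skewed pattern than under the uniform one, which is the behavior the Countermark-tree is proposed to mitigate.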
format Online
Article
Text
id pubmed-9694360
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9694360 2022-11-26 Efficient Integrity-Tree Structure for Convolutional Neural Networks through Frequent Counter Overflow Prevention in Secure Memories. Kim, Jesung; Lee, Wonyoung; Hong, Jeongkyu; Kim, Soontae. Sensors (Basel), Article. MDPI 2022-11-13 /pmc/articles/PMC9694360/ /pubmed/36433359 http://dx.doi.org/10.3390/s22228762 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Efficient Integrity-Tree Structure for Convolutional Neural Networks through Frequent Counter Overflow Prevention in Secure Memories
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9694360/
https://www.ncbi.nlm.nih.gov/pubmed/36433359
http://dx.doi.org/10.3390/s22228762