JsrNet: A Joint Sampling–Reconstruction Framework for Distributed Compressive Video Sensing

Massive video data poses great challenges to computing power and storage space, triggering the emergence of distributed compressive video sensing (DCVS). The hardware-friendly characteristics of this technique have consolidated its position as one of the most powerful architectures for source-limited scenarios, namely wireless video sensor networks (WVSNs). Recently, deep convolutional neural networks (DCNNs) have been successfully applied to DCVS because traditional optimization-based methods are computationally elaborate and can hardly meet the requirements of real-time applications. In this paper, we propose a joint sampling–reconstruction framework for DCVS, named "JsrNet". JsrNet uses the whole group of frames as the reference to reconstruct each frame, key and non-key alike, whereas existing frameworks use only key frames as the reference to reconstruct non-key frames. Moreover, unlike existing frameworks, which exploit complementary information between frames only during joint reconstruction, JsrNet also applies this concept to joint sampling, adopting learnable convolutions to sample multiple frames jointly and simultaneously in an encoder. JsrNet fully exploits spatial–temporal correlation in both sampling and reconstruction, and achieves competitive performance in both reconstruction quality and computational complexity, making it a promising candidate for source-limited, real-time scenarios.
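
The core idea named in the abstract, joint sampling of a whole group of frames with learnable convolutions, can be sketched in a few lines. The following PyTorch snippet is a hypothetical illustration, not code from the paper: the class name JointSampler and the group size, block size, and measurement rate are assumptions chosen for exposition.

# Minimal sketch of joint compressive sampling across a group of frames.
# All hyperparameters below are illustrative assumptions, not paper values.
import torch
import torch.nn as nn

class JointSampler(nn.Module):
    """Compressively sample a group of frames with one learnable convolution.

    Stacking the T frames as input channels lets every measurement mix
    information from the whole group, unlike per-frame block sampling.
    """
    def __init__(self, num_frames=8, block_size=32, rate=0.1):
        super().__init__()
        # Measurements per block, derived from the target sampling rate.
        m = max(1, int(rate * num_frames * block_size * block_size))
        # kernel_size == stride: each non-overlapping B x B block of the
        # group is projected exactly once, like a learned sensing matrix.
        self.sample = nn.Conv2d(num_frames, m, kernel_size=block_size,
                                stride=block_size, bias=False)

    def forward(self, frames):
        # frames: (batch, T, H, W) group of grayscale frames.
        return self.sample(frames)  # (batch, m, H/B, W/B) measurements

group = torch.randn(1, 8, 128, 128)   # one group of 8 frames
measurements = JointSampler()(group)
print(measurements.shape)             # torch.Size([1, 819, 4, 4])

On the decoder side, the abstract indicates that a reconstruction network then uses the whole group as the reference for every frame, key and non-key alike; the sketch above covers only the encoder-side joint sampling.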


Bibliographic Details
Main Authors: Chen, Can, Wu, Yutong, Zhou, Chao, Zhang, Dengyin
Format: Online Article Text
Language: English
Published: MDPI, 2019-12-30
Journal: Sensors (Basel)
Collection: PubMed
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6983164/
https://www.ncbi.nlm.nih.gov/pubmed/31905916
http://dx.doi.org/10.3390/s20010206
License: © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).