
Distributed Visual Crowdsensing Framework for Area Coverage in Resource Constrained Environments

Bibliographic Details
Main Authors: Mowafi, Moad, Awad, Fahed, Al-Quran, Fida’a
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9330599/
https://www.ncbi.nlm.nih.gov/pubmed/35897971
http://dx.doi.org/10.3390/s22155467
author Mowafi, Moad
Awad, Fahed
Al-Quran, Fida’a
author_sort Mowafi, Moad
collection PubMed
description Visual crowdsensing applications that use the built-in cameras of smartphones have recently attracted researchers’ interest. Making the most of the limited resources to acquire the most helpful images from the public is a challenge in disaster recovery applications. Proposed solutions should adequately address several constraints, including limited bandwidth, limited energy resources, and interrupted communication links with the command center or server. Furthermore, data redundancy is considered one of the main challenges in visual crowdsensing. In distributed visual crowdsensing systems, photo sharing replicates data across sensor nodes and increases the amount of data stored on each node. As a result, if any node can communicate with the server, more photos of the target region become available to the server. Methods for recognizing and removing redundant data provide a range of benefits, including decreased transmission costs and lower overall energy consumption. To handle the interrupted communication with the server and the restricted resources of the sensor nodes, this paper proposes a distributed visual crowdsensing system for full-view area coverage. The target area is divided into virtual sub-regions, each of which is represented by a set of boundary points of interest. Then, based on the criteria for full-view area coverage, a specific data structure scheme is developed to represent each photo with a set of features. The geometric context parameters of each photo are utilized to extract these features according to the full-view area coverage criteria. Finally, data redundancy removal algorithms are implemented based on the proposed clustering scheme to eliminate duplicate photos. As a result, each sensor node can filter redundant photos in distributed environments without requiring high computational complexity, extensive resources, or global awareness of all photos from all sensor nodes inside the target area. Compared with the most recent state-of-the-art methods, the proposed method improves the added value of the provided photos by more than 38%. In terms of traffic, the proposed method requires less data to be transferred between sensor nodes and between sensor nodes and the command center: the overall reduction in traffic exceeds 20%, and the overall savings in energy consumption exceed 25%. In the proposed system, sending photos between sensor nodes, as well as between sensor nodes and the command center, consumes less energy than in existing approaches, which require a considerable amount of photo exchange. Thus, the proposed technique effectively transfers only the most valuable photos.
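
The record contains no source code. As a rough illustration of the approach described in the abstract, the sketch below shows how a photo could be reduced to a set of coverage features derived from its geometric context (position, heading, field of view, sensing range) relative to the boundary points of interest of a sub-region, and how a node could then filter redundant photos locally. All names, data fields, thresholds, and the sector-based approximation of full-view coverage are assumptions made for illustration; they are not the authors' actual data structures or algorithms.

# Illustrative sketch (not the authors' code): each photo is represented by the
# (boundary point, viewing-direction sector) pairs it covers; a photo is kept
# only if it adds coverage that locally stored photos do not already provide.
import math
from dataclasses import dataclass

@dataclass
class Photo:
    x: float          # camera position (m) -- assumed fields
    y: float
    heading: float    # camera orientation (degrees, 0 = east)
    fov: float        # angular field of view (degrees)
    rng: float        # effective sensing range (m)

def covered_features(photo, points, sectors=8):
    """Return the set of (point index, direction sector) pairs the photo covers.

    A boundary point counts as covered if it lies within the camera's range and
    field of view; the sector discretizes the direction from which the point is
    seen, a crude stand-in for the full-view area coverage criterion.
    """
    feats = set()
    for i, (px, py) in enumerate(points):
        dx, dy = px - photo.x, py - photo.y
        dist = math.hypot(dx, dy)
        if dist > photo.rng or dist == 0:
            continue
        bearing = math.degrees(math.atan2(dy, dx)) % 360
        off = (bearing - photo.heading + 180) % 360 - 180
        if abs(off) > photo.fov / 2:
            continue  # point lies outside the camera's field of view
        view_dir = (bearing + 180) % 360   # direction the point is viewed from
        feats.add((i, int(view_dir // (360 / sectors))))
    return feats

def filter_redundant(photos, points):
    """Greedy local redundancy removal: keep a photo only if it adds new features."""
    kept, seen = [], set()
    for p in photos:
        feats = covered_features(p, points)
        if feats - seen:               # photo contributes uncovered (point, sector) pairs
            kept.append(p)
            seen |= feats
    return kept

# Example: two nearly identical photos of the same boundary points, plus one
# taken from the opposite side of the sub-region.
pts = [(10.0, 0.0), (12.0, 3.0)]
photos = [Photo(0, 0, 0, 60, 20), Photo(0.5, 0, 0, 60, 20), Photo(20, 0, 180, 60, 20)]
print(len(filter_redundant(photos, pts)))  # prints 2: the second photo is dropped as redundant

Discretizing the viewing direction into a handful of sectors is only one simple way to approximate the full-view requirement that each point of interest be observed from sufficiently many directions; the paper's actual coverage criteria, photo feature set, and clustering-based removal algorithms may differ.
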
format Online
Article
Text
id pubmed-9330599
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9330599 2022-07-29 Distributed Visual Crowdsensing Framework for Area Coverage in Resource Constrained Environments Mowafi, Moad; Awad, Fahed; Al-Quran, Fida’a Sensors (Basel) Article MDPI 2022-07-22 /pmc/articles/PMC9330599/ /pubmed/35897971 http://dx.doi.org/10.3390/s22155467 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Distributed Visual Crowdsensing Framework for Area Coverage in Resource Constrained Environments
title_sort distributed visual crowdsensing framework for area coverage in resource constrained environments
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9330599/
https://www.ncbi.nlm.nih.gov/pubmed/35897971
http://dx.doi.org/10.3390/s22155467
work_keys_str_mv AT mowafimoad distributedvisualcrowdsensingframeworkforareacoverageinresourceconstrainedenvironments
AT awadfahed distributedvisualcrowdsensingframeworkforareacoverageinresourceconstrainedenvironments
AT alquranfidaa distributedvisualcrowdsensingframeworkforareacoverageinresourceconstrainedenvironments