
Smart Data Placement Using Storage-as-a-Service Model for Big Data Pipelines

Big data pipelines are developed to process data characterized by one or more of the three big data features, commonly known as the three Vs (volume, velocity, and variety), through a series of steps (e.g., extract, transform, and move), laying the groundwork for the use of advanced analytics and ML/AI techniques. The computing continuum (i.e., cloud/fog/edge) allows access to a virtually infinite amount of resources, where data pipelines could be executed at scale; however, the implementation of data pipelines on the continuum is a complex task that needs to take computing resources, data transmission channels, triggers, data transfer methods, integration of message queues, etc., into account. The task becomes even more challenging when data storage is considered as part of the data pipelines. Local storage is expensive, hard to maintain, and comes with several challenges (e.g., data availability, data security, and backup). The use of cloud storage, i.e., storage-as-a-service (StaaS), instead of local storage has the potential to provide more flexibility in terms of scalability, fault tolerance, and availability. In this article, we propose a generic approach to integrate StaaS with data pipelines, i.e., computation on an on-premise server or on a specific cloud, but integration with StaaS, and develop a ranking method for available storage options based on five key parameters: cost, proximity, network performance, server-side encryption, and user weights/preferences. The evaluation carried out demonstrates the effectiveness of the proposed approach in terms of data transfer performance, the utility of the individual parameters, and the feasibility of dynamically selecting a storage option based on four primary user scenarios.

Bibliographic Details
Main Authors: Khan, Akif Quddus, Nikolov, Nikolay, Matskin, Mihhail, Prodan, Radu, Roman, Dumitru, Sahin, Bekir, Bussler, Christoph, Soylu, Ahmet
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9863399/
https://www.ncbi.nlm.nih.gov/pubmed/36679360
http://dx.doi.org/10.3390/s23020564
collection PubMed
description Big data pipelines are developed to process data characterized by one or more of the three big data features, commonly known as the three Vs (volume, velocity, and variety), through a series of steps (e.g., extract, transform, and move), laying the groundwork for the use of advanced analytics and ML/AI techniques. The computing continuum (i.e., cloud/fog/edge) allows access to a virtually infinite amount of resources, where data pipelines could be executed at scale; however, the implementation of data pipelines on the continuum is a complex task that needs to take computing resources, data transmission channels, triggers, data transfer methods, integration of message queues, etc., into account. The task becomes even more challenging when data storage is considered as part of the data pipelines. Local storage is expensive, hard to maintain, and comes with several challenges (e.g., data availability, data security, and backup). The use of cloud storage, i.e., storage-as-a-service (StaaS), instead of local storage has the potential to provide more flexibility in terms of scalability, fault tolerance, and availability. In this article, we propose a generic approach to integrate StaaS with data pipelines, i.e., computation on an on-premise server or on a specific cloud, but integration with StaaS, and develop a ranking method for available storage options based on five key parameters: cost, proximity, network performance, server-side encryption, and user weights/preferences. The evaluation carried out demonstrates the effectiveness of the proposed approach in terms of data transfer performance, the utility of the individual parameters, and the feasibility of dynamically selecting a storage option based on four primary user scenarios.
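The abstract's ranking of storage options over five parameters (cost, proximity, network performance, server-side encryption, and user weights/preferences) can be illustrated with a minimal weighted-score sketch. This is an illustrative assumption, not the authors' exact formulation: the parameter proxies (price per GB, latency, throughput), the min–max normalization, and all names and weights below are hypothetical.

```python
# Hedged sketch: rank StaaS candidates by a user-weighted sum of
# normalized criteria. All field names, proxies, and weights are
# illustrative assumptions, not the paper's actual method.
from dataclasses import dataclass

@dataclass
class StorageOption:
    name: str
    cost_per_gb: float        # USD/GB-month; lower is better
    latency_ms: float         # proximity proxy; lower is better
    throughput_mbps: float    # network performance; higher is better
    server_side_encryption: bool

def rank_options(options, weights):
    """Return (option, score) pairs sorted best-first."""
    def norm(values, lower_is_better):
        # Min-max normalize to [0, 1], flipping when lower is better.
        lo, hi = min(values), max(values)
        if hi == lo:
            return [1.0] * len(values)
        scaled = [(v - lo) / (hi - lo) for v in values]
        return [1.0 - s if lower_is_better else s for s in scaled]

    costs = norm([o.cost_per_gb for o in options], lower_is_better=True)
    prox = norm([o.latency_ms for o in options], lower_is_better=True)
    net = norm([o.throughput_mbps for o in options], lower_is_better=False)
    enc = [1.0 if o.server_side_encryption else 0.0 for o in options]

    scores = [
        weights["cost"] * c + weights["proximity"] * p
        + weights["network"] * n + weights["encryption"] * e
        for c, p, n, e in zip(costs, prox, net, enc)
    ]
    return sorted(zip(options, scores), key=lambda t: t[1], reverse=True)

# Hypothetical candidates and user preference weights.
candidates = [
    StorageOption("region-a", 0.023, 12.0, 800.0, True),
    StorageOption("region-b", 0.018, 45.0, 400.0, True),
    StorageOption("region-c", 0.026, 8.0, 950.0, False),
]
user_weights = {"cost": 0.3, "proximity": 0.3, "network": 0.2, "encryption": 0.2}
ranking = rank_options(candidates, user_weights)
```

The user weights encode the "user weights/preferences" parameter: shifting weight toward `cost` favors the cheapest region, while shifting it toward `encryption` filters toward options with server-side encryption, which is how dynamic selection across the abstract's different user scenarios could play out.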
id pubmed-9863399
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-9863399 2023-01-22 Smart Data Placement Using Storage-as-a-Service Model for Big Data Pipelines Sensors (Basel) Article MDPI 2023-01-04 /pmc/articles/PMC9863399/ /pubmed/36679360 http://dx.doi.org/10.3390/s23020564 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
topic Article