Smart Data Placement Using Storage-as-a-Service Model for Big Data Pipelines
Big data pipelines are developed to process data characterized by one or more of the three big data features, commonly known as the three Vs (volume, velocity, and variety), through a series of steps (e.g., extract, transform, and move), laying the groundwork for the use of advanced analytics and M...
Main Authors: Khan, Akif Quddus; Nikolov, Nikolay; Matskin, Mihhail; Prodan, Radu; Roman, Dumitru; Sahin, Bekir; Bussler, Christoph; Soylu, Ahmet
Format: Online Article Text
Language: English
Published: MDPI, 2023
Online Access:
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9863399/
- https://www.ncbi.nlm.nih.gov/pubmed/36679360
- http://dx.doi.org/10.3390/s23020564
Similar Items
- Big Data Workflows: Locality-Aware Orchestration Using Software Containers
  by: Corodescu, Andrei-Alin, et al.
  Published: (2021)
- From big data to smart data
  by: Iafrate, Fernando
  Published: (2015)
- BigDataScript: a scripting language for data pipelines
  by: Cingolani, Pablo, et al.
  Published: (2015)
- Big data and smart service systems
  by: Liu, Xiwei, et al.
  Published: (2016)
- Big data and smart digital environment
  by: Farhaoui, Yousef, et al.
  Published: (2019)