CMS Analysis and Data Reduction with Apache Spark


Bibliographic Details

Main Authors: Gutsche, Oliver; Canali, Luca; Cremer, Illia; Cremonesi, Matteo; Elmer, Peter; Fisk, Ian; Girone, Maria; Jayatilaka, Bo; Kowalkowski, Jim; Khristenko, Viktor; Motesnitsalis, Evangelos; Pivarski, Jim; Sehrish, Saba; Surdy, Kacper; Svyatkovskiy, Alexey
Language: eng
Published: 2017
Subjects: cs.DC; Computing and Computers
Online Access: https://dx.doi.org/10.1088/1742-6596/1085/4/042030
http://cds.cern.ch/record/2291606
author Gutsche, Oliver
Canali, Luca
Cremer, Illia
Cremonesi, Matteo
Elmer, Peter
Fisk, Ian
Girone, Maria
Jayatilaka, Bo
Kowalkowski, Jim
Khristenko, Viktor
Motesnitsalis, Evangelos
Pivarski, Jim
Sehrish, Saba
Surdy, Kacper
Svyatkovskiy, Alexey
author_sort Gutsche, Oliver
collection CERN
description Experimental Particle Physics has been at the forefront of analyzing the world's largest datasets for decades. The HEP community was among the first to develop suitable software and computing tools for this task. In recent times, new toolkits and systems for distributed data processing, collectively called "Big Data" technologies, have emerged from industry and open-source projects to support the analysis of petabyte- and exabyte-scale datasets. While the principles of data analysis in HEP have not changed (filtering and transforming experiment-specific data formats), these new technologies use different approaches and tools, promising a fresh look at the analysis of very large datasets that could reduce the time-to-physics with increased interactivity. Moreover, these new tools are typically developed by large, active communities, often profiting from industry resources, and released under open-source licenses. These factors boost the adoption and maturity of the tools and of the communities supporting them, while reducing the cost of ownership for end users. In this talk, we present studies of using Apache Spark for end-user data analysis. We study the HEP analysis workflow separated into two thrusts: the reduction of centrally produced experiment datasets, and the end analysis up to the publication plot. For the first thrust, CMS is working with CERN openlab and Intel on the CMS Big Data Reduction Facility, whose goal is to reduce 1 PB of official CMS data to 1 TB of ntuple output for analysis. We present the progress of this two-year project, with first results on scaling up Spark-based HEP analysis. For the second thrust, we present studies on using Apache Spark for a CMS Dark Matter physics search, comparing Spark's feasibility, usability, and performance with the traditional ROOT-based analysis.
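The two thrusts named in the abstract (petabyte-to-terabyte reduction, then the end analysis feeding a publication plot) map naturally onto Spark's DataFrame API. The sketch below is illustrative only, not code from the paper: the input path, column names, and cut values are hypothetical, and it assumes the event data is already available in a Spark-readable columnar format such as Parquet (the CMS study itself dealt with experiment-specific ROOT data).

```python
# Minimal PySpark sketch of both thrusts described in the abstract.
# Paths, column names, and cut values are hypothetical, not from the paper.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cms-spark-sketch").getOrCreate()

# Thrust 1: reduce centrally produced data to a small ntuple-like output
# by keeping only the columns the analysis needs and applying loose cuts.
events = spark.read.parquet("hdfs:///cms/events/")  # hypothetical input

ntuple = (
    events
    .select("run", "lumi", "event", "muon_pt", "muon_eta", "met")  # hypothetical columns
    .where((F.col("muon_pt") > 30.0) & (F.abs(F.col("muon_eta")) < 2.4))
)
ntuple.write.mode("overwrite").parquet("hdfs:///user/analysis/ntuple/")

# Thrust 2: end analysis -- bin a quantity into a histogram that would
# feed the publication plot (here: 20 GeV bins of missing transverse energy).
histogram = (
    ntuple
    .withColumn("met_bin", F.floor(F.col("met") / 20.0) * 20.0)
    .groupBy("met_bin")
    .count()
    .orderBy("met_bin")
)
histogram.show()

spark.stop()
```

In practice the reduced DataFrame would be cached or persisted before repeated aggregations, so that interactive end analysis does not recompute the selection from the full input each time.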
id cern-2291606
institution European Organization for Nuclear Research
language eng
publishDate 2017
record_format invenio
spelling cern-2291606 2021-02-09T10:07:43Z
doi:10.1088/1742-6596/1085/4/042030
http://cds.cern.ch/record/2291606
arXiv:1711.00375
FERMILAB-CONF-17-465-CD
oai:cds.cern.ch:2291606
2017-10-31
title CMS Analysis and Data Reduction with Apache Spark
topic cs.DC
Computing and Computers
url https://dx.doi.org/10.1088/1742-6596/1085/4/042030
http://cds.cern.ch/record/2291606