Clustering Error Messages Produced by Distributed Computing Infrastructure During the Processing of High Energy Physics Data
Main author:
Language: eng
Published: 2020
Subjects:
Online access: http://cds.cern.ch/record/2742495
Summary: Large-scale distributed computing infrastructures ensure the operation and maintenance of scientific experiments at the LHC: more than 160 computing centers around the world execute tens of millions of computing jobs per day. ATLAS, the largest experiment at the LHC, creates an enormous flow of data which has to be recorded and analyzed by a complex, heterogeneous, and distributed computing environment. Statistically, about 10-12% of computing jobs finish with failures: network faults, service failures, security incidents, and other error conditions trigger error messages which provide detailed information about the issue and can be used for diagnosis and proactive fault handling. However, the analysis is complicated by the sheer scale of the textual log data, often exacerbated by the lack of well-defined structure: human experts have to interpret the detected messages and create parsing rules manually, which is time-consuming and does not allow previously unknown error conditions to be identified without further human intervention. This paper describes a pipeline of methods for unsupervised clustering of multi-source error messages. The pipeline is data-driven, based on machine learning algorithms, and executed in a fully automated way, allowing it to categorize error messages that share similar textual patterns and meaning.
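The summary's core idea, grouping error messages that share a textual pattern, can be illustrated with a minimal sketch. This is not the paper's actual machine-learning pipeline; it is a simplified, rule-based stand-in that masks variable fields (numbers, paths, hex identifiers) so that messages differing only in those details fall into the same cluster. The masking rules and the sample messages below are assumptions chosen for illustration.

```python
import re
from collections import defaultdict

def normalize(message):
    """Mask variable fields so messages differing only in such details
    map to the same template. These masking rules are illustrative
    assumptions, not the method used in the paper."""
    msg = re.sub(r'/[\w./-]+', '<PATH>', message)       # file paths
    msg = re.sub(r'0x[0-9a-fA-F]+', '<HEX>', msg)       # hex identifiers
    msg = re.sub(r'\d+', '<NUM>', msg)                  # bare numbers
    return msg

def cluster_messages(messages):
    """Group raw messages by their normalized template."""
    clusters = defaultdict(list)
    for m in messages:
        clusters[normalize(m)].append(m)
    return dict(clusters)

# Hypothetical error messages, loosely modeled on grid-job failures.
errors = [
    "Transfer to site CERN-PROD failed after 3 retries",
    "Transfer to site CERN-PROD failed after 7 retries",
    "Cannot open file /data/run42/events.root",
    "Cannot open file /data/run57/events.root",
]

clusters = cluster_messages(errors)
# The four messages collapse into two template clusters.
```

A data-driven pipeline such as the one described in the paper replaces the hand-written masking rules with learned text representations and unsupervised clustering, which is what allows previously unknown error conditions to be grouped without human intervention.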