Quantitative Methods for Data Driven Reliability Optimization of Engineered Systems

Bibliographic Details
Main author: Felsberger, Lukas
Language: eng
Published: Universitätsbibliothek LMU München 2021
Subjects:
Online access: http://cds.cern.ch/record/2766016
_version_ 1780971193267912704
author Felsberger, Lukas
author_facet Felsberger, Lukas
author_sort Felsberger, Lukas
collection CERN
description Particle accelerators, such as the Large Hadron Collider at CERN, are among the largest and most complex engineered systems to date. Future generations of particle accelerators are expected to increase in size, complexity, and cost. Among the many obstacles this creates are unprecedented reliability challenges, which require new reliability optimization approaches. With the increasing digitalization of technical infrastructures, the rate and granularity of operational data collection are growing rapidly. These data contain valuable information for system reliability optimization, which can be extracted and processed with data-science methods and algorithms. However, many existing data-driven reliability optimization methods fail to exploit these data because they make overly simplistic assumptions about system behavior, do not consider the organizational context required for cost-effectiveness, and rely on specialized monitoring data that are too expensive to record. To address these limitations in realistic scenarios, a tailored methodology based on CRISP-DM (CRoss-Industry Standard Process for Data Mining) is proposed for developing data-driven reliability optimization methods. For three realistic scenarios, the developed methods use the available operational data to learn interpretable or explainable failure models from which permanent and generally applicable reliability improvements can be derived: Firstly, novel explainable deep-learning methods accurately predict future alarms from few logged alarm examples and support root-cause identification. Secondly, novel parametric reliability models incorporate expert knowledge to better quantify the failure behavior of a fleet of systems with heterogeneous operating conditions and to derive optimal operational strategies for novel usage scenarios. Thirdly, Bayesian models trained on data from a range of comparable systems accurately predict field reliability and reveal the influence of non-technical factors on reliability. An evaluation of the methods in the three scenarios confirms that the tailored CRISP-DM methodology advances the state of the art in data-driven reliability optimization and overcomes many existing limitations. However, the quality of the collected operational data remains crucial for the success of such approaches. Hence, adaptations of routine data collection procedures are suggested to enhance data quality and to increase the success rate of reliability optimization projects. With the developed methods and findings, future generations of particle accelerators can be constructed and operated cost-effectively, ensuring high levels of reliability despite growing system complexity.
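The abstract does not give implementation details for any of the three methods. As a purely illustrative sketch of the second idea (a parametric reliability model with expert knowledge encoded as priors), the following hypothetical Python example fits a Weibull failure-time model by MAP estimation; all data values, prior choices, and names are assumptions for illustration and are not taken from the thesis.

    # Illustrative sketch only: a parametric Weibull failure-time model whose
    # shape/scale priors encode expert knowledge, fitted by MAP estimation.
    # All numbers below are hypothetical and not taken from the thesis.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min, lognorm

    # Hypothetical observed times-to-failure (operating hours) for one fleet.
    failure_times = np.array([800.0, 1200.0, 2300.0, 3100.0, 4100.0, 5200.0])

    # Expert knowledge as log-normal priors on the Weibull shape (beta) and
    # scale (eta), e.g. "mild wear-out behaviour, beta around 1.5".
    prior_beta = lognorm(s=0.5, scale=1.5)
    prior_eta = lognorm(s=1.0, scale=3000.0)

    def neg_log_posterior(params):
        # Optimize in log-space so beta and eta stay positive.
        beta, eta = np.exp(params)
        loglik = weibull_min.logpdf(failure_times, c=beta, scale=eta).sum()
        logprior = prior_beta.logpdf(beta) + prior_eta.logpdf(eta)
        return -(loglik + logprior)

    # MAP estimate of the Weibull parameters under the expert priors.
    result = minimize(neg_log_posterior, x0=[np.log(1.5), np.log(3000.0)])
    beta_map, eta_map = np.exp(result.x)
    print(f"MAP shape beta = {beta_map:.2f}, MAP scale eta = {eta_map:.0f} h")

Under these assumptions, a fitted shape parameter above 1 would indicate wear-out behaviour for the hypothetical fleet, and the prior terms keep the estimate plausible when only a few failures have been logged.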
id cern-2766016
institution European Organization for Nuclear Research (CERN)
language eng
publishDate 2021
publisher Universitätsbibliothek LMU München
record_format invenio
spelling cern-2766016 | 2021-05-10T20:12:20Z | http://cds.cern.ch/record/2766016 | eng | Felsberger, Lukas | Quantitative Methods for Data Driven Reliability Optimization of Engineered Systems | Computing and Computers; Engineering | Universitätsbibliothek LMU München | CERN-THESIS-2020-334 | oai:cds.cern.ch:2766016 | 2021-04-19
spellingShingle Computing and Computers
Engineering
Felsberger, Lukas
Quantitative Methods for Data Driven Reliability Optimization of Engineered Systems
title Quantitative Methods for Data Driven Reliability Optimization of Engineered Systems
title_full Quantitative Methods for Data Driven Reliability Optimization of Engineered Systems
title_fullStr Quantitative Methods for Data Driven Reliability Optimization of Engineered Systems
title_full_unstemmed Quantitative Methods for Data Driven Reliability Optimization of Engineered Systems
title_short Quantitative Methods for Data Driven Reliability Optimization of Engineered Systems
title_sort quantitative methods for data driven reliability optimization of engineered systems
topic Computing and Computers
Engineering
url http://cds.cern.ch/record/2766016
work_keys_str_mv AT felsbergerlukas quantitativemethodsfordatadrivenreliabilityoptimizationofengineeredsystems